In supervised fine-tuning (SFT) for Text2SQL tasks, particularly for databases with numerous tables, encoding schema features requires excessive tokens, escalating GPU resource requirements during fine-tuning. To address this problem, we propose LR-SQL, a general dual-model SFT framework comprising a schema linking model and an SQL generation model. At the core of our framework lies the schema linking model, which is trained on a novel downstream task termed slice-based related table filtering. This task dynamically partitions a database into adjustable slices of tables and sequentially evaluates the relevance of each slice to the input query, thereby reducing token consumption per iteration. However, slicing inevitably fragments the database's information, impairing the model's ability to comprehend the database as a whole. We therefore integrate Chain-of-Thought (CoT) reasoning into training, enabling the model to reconstruct the full database context from discrete slices and thereby improving inference fidelity. Finally, the SQL generation model uses the tables selected by the schema linking model to generate the final SQL. Extensive experiments demonstrate that LR-SQL reduces total GPU memory usage by 40% compared to baseline SFT methods, with only a 2% drop in table prediction accuracy for the schema linking task and a negligible 0.6% decrease in overall Text2SQL Execution Accuracy.
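The two-stage pipeline can be summarized in code. Below is a minimal sketch, assuming the fine-tuned models are exposed as simple callables (`ask_linking_model`, `ask_sql_model`); these names, the `slice_size` parameter, and the prompt wording are illustrative assumptions rather than the paper's actual implementation, and the CoT reconstruction step is omitted for brevity.

```python
from typing import Callable, Dict, List

Schema = Dict[str, List[str]]  # table name -> column names


def make_slices(schema: Schema, slice_size: int) -> List[Schema]:
    """Partition the schema into slices of at most `slice_size` tables."""
    names = list(schema)
    return [
        {name: schema[name] for name in names[i:i + slice_size]}
        for i in range(0, len(names), slice_size)
    ]


def filter_related_tables(
    question: str,
    schema: Schema,
    ask_linking_model: Callable[[str], List[str]],
    slice_size: int = 4,
) -> List[str]:
    """Slice-based related table filtering: prompt the schema linking model once
    per slice and accumulate the tables it deems relevant, so each prompt stays
    short regardless of how many tables the database contains."""
    related: List[str] = []
    for schema_slice in make_slices(schema, slice_size):
        slice_text = "\n".join(
            f"{t}({', '.join(cols)})" for t, cols in schema_slice.items()
        )
        prompt = (
            f"Question: {question}\n"
            f"Candidate tables:\n{slice_text}\n"
            "Which of these tables are needed to answer the question?"
        )
        related.extend(t for t in ask_linking_model(prompt) if t in schema_slice)
    return related


def generate_sql(
    question: str,
    schema: Schema,
    related: List[str],
    ask_sql_model: Callable[[str], str],
) -> str:
    """Feed only the filtered tables to the SQL generation model."""
    schema_text = "\n".join(f"{t}({', '.join(schema[t])})" for t in related)
    return ask_sql_model(f"Schema:\n{schema_text}\nQuestion: {question}\nSQL:")
```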
Details
Zhang, Yongpan 2; Pan, Su 1; Sun, Yuwei 1; Lu, Pengwei 2; Cheng, Ding 2
1 School of Internet of Things, Nanjing University of Posts and Telecommunications, New Model Road, Nanjing 210003, China; [email protected] (W.W.); [email protected] (Y.S.)
2 China Telecom Co., Ltd., Jiangsu Branch, Yun Jin Road, Nanjing 210019, China; [email protected] (Y.Z.); [email protected] (P.L.); [email protected] (C.D.)