Based on the idea of Decomposition for Enhancing Attention, we propose a workflow-paradigm method named DEA-SQL with five major steps, as shown in the figure. Check out our paper for more information.
# 1. Clone the repo
git clone https://github.com/FlyingFeather/DEA-SQL.git
cd DEA-SQL && mkdir data
# 2. Make a conda environment
conda create -n deasql python=3.9
conda activate deasql
# 3. Install requirements
pip install -r requirements.txt
python nltk_downloader.py
Download the dataset from the Spider official website, unzip it, and put it into the data folder under DEA-SQL.
We also provide the data on Drive in case it cannot be downloaded from the Spider official website.
mkdir -p data  # the data folder may already exist from step 1
unzip spider.zip -d data
The directory structure should be as follows:
.
├── argsparser.py
├── common
├── correct_sql.py
├── data
│   └── spider
│       ├── ...
│       └── database
├── data_preprocess.py
├── docs
├── evaluation
├── fewshot
├── filter_characters.py
├── gen_sql.py
├── get_ner.py
├── hardness_eval.py
├── __init__.py
├── LICENSE
├── llm
├── logger.py
├── main.py
├── nltk_downloader.py
├── outputs
├── prompt
├── README.md
├── requirements.txt
└── single_eval.py
Please modify the OpenAI configuration in common/static_config.py and set the relevant environment variables for the Azure OpenAI API.
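As a sketch, the environment variables might look like the following. The variable names below are the conventional Azure OpenAI ones and are an assumption; the exact names read by common/static_config.py may differ, so check that file first.

```shell
# Assumed Azure OpenAI variables (placeholders, not real credentials);
# verify the names against common/static_config.py before use.
export AZURE_OPENAI_API_KEY="<your-api-key>"
export AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com/"
export OPENAI_API_VERSION="<api-version>"
```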
Run main.py with several important parameters:
python main.py --save_file_name "dea-sql.txt" --dataset "spider" --mode "dev" --sample "False" --few_shot_mode "masked_ques_sim" --insert_value 3 --embedding_base_model "openai" --sc_filter_nums 3 --few_shot_data "train_merge_v5"
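For orientation, a minimal sketch of how these flags could be parsed is shown below. The flag names come from the command above; the types and defaults are assumptions, not the repository's actual argsparser.py.

```python
import argparse

# Hypothetical parser mirroring the flags in the README command;
# types and defaults are assumed, not taken from the repository.
parser = argparse.ArgumentParser(description="DEA-SQL entry point (sketch)")
parser.add_argument("--save_file_name", type=str, default="dea-sql.txt")
parser.add_argument("--dataset", type=str, default="spider")
parser.add_argument("--mode", type=str, default="dev")
parser.add_argument("--sample", type=str, default="False")
parser.add_argument("--few_shot_mode", type=str, default="masked_ques_sim")
parser.add_argument("--insert_value", type=int, default=3)
parser.add_argument("--embedding_base_model", type=str, default="openai")
parser.add_argument("--sc_filter_nums", type=int, default=3)
parser.add_argument("--few_shot_data", type=str, default="train_merge_v5")

# Parse a subset of the flags; the rest fall back to the assumed defaults.
args = parser.parse_args(["--dataset", "spider", "--mode", "dev",
                          "--sc_filter_nums", "3"])
print(args.dataset, args.mode, args.sc_filter_nums)
```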
Before running the evaluation for the first time, execute: python nltk_downloader.py
python evaluation/test-suite-sql-eval/evaluation.py --gold "evaluation/gold_files/spider_dev_gold.sql" --pred "outputs/spider/dea-sql.txt" --db ./data/spider/database --print_file_name "outputs/spider/spider-dea-sql.txt" --table './data/spider/tables.json' --etype exec
@article{xie2024decomposition,
  title={Decomposition for Enhancing Attention: Improving LLM-based Text-to-SQL through Workflow Paradigm},
  author={Yuanzhen Xie and Xinzhou Jin and Tao Xie and MingXiong Lin and Liang Chen and Chenyun Yu and Lei Cheng and ChengXiang Zhuo and Bo Hu and Zang Li},
  journal={arXiv preprint arXiv:2402.10671},
  year={2024}
}