guotong1988 / NL2SQL-RULE

Content Enhanced BERT-based Text-to-SQL Generation https://arxiv.org/abs/1910.07179
188 stars · 48 forks

Implemented train for testing on my dataset. #5

Closed Bill-ai closed 4 years ago

Bill-ai commented 4 years ago

When the model prompts "Type question:" and I type a question, the following error pops up:

corenlp.client.PermanentlyFailedException: Timed out waiting for service to come alive.

Do you know why this happens? Please help.

PS: I'm calling the infer function, and changed the args as follows: --do_train to False, --do_infer to True, --infer_loop to True, --EG to True; inside the infer block I set show_table to True and show_answer_only to True.

Thanks Bill

guotong1988 commented 4 years ago

Start a local stanford-corenlp service.
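The Python corenlp client raises PermanentlyFailedException when no server is listening on the expected port. A typical way to start a local server (assuming you have downloaded and unzipped the Stanford CoreNLP distribution; the port, memory, and timeout settings here are illustrative, not from this repo):

```shell
# Run from inside the unzipped stanford-corenlp directory:
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
    -port 9000 -timeout 15000
```

Leave this running in a separate terminal before starting train.py.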

Bill-ai commented 4 years ago

Thanks, I started the server and now this pops up. I know there is already an issue created for this, but I didn't understand the answer you gave there. Can you please explain it to me in a simpler way?

Traceback (most recent call last):
  File "train.py", line 837, in <module>
    beam_size=1, show_table=False, show_answer_only=True
  File "train.py", line 667, in infer
    beam_size=beam_size)
  File "/home/govind/Documents/Cricket/NL2SQL-BERT/sqlova/model/nl2sql/wikisql_models.py", line 115, in beam_forward
    knowledge=knowledge, knowledge_header=knowledge_header)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/govind/Documents/Cricket/NL2SQL-BERT/sqlova/model/nl2sql/wikisql_models.py", line 562, in forward
    knowledge = [k + (mL_n - len(k)) * [0] for k in knowledge]
TypeError: 'NoneType' object is not iterable
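The failing line in wikisql_models.py pads each example's knowledge features to a common length with zeros; the TypeError means `knowledge` was still None when it reached that line (i.e. the knowledge features were never built for the input). A minimal sketch of that padding step with a guard added; the function name `pad_knowledge` and the None-guard are my own, not from the repo:

```python
def pad_knowledge(knowledge, mL_n):
    """Pad each knowledge list to length mL_n with zeros.

    `knowledge` is a list of per-example integer feature lists; mL_n is
    the max sequence length. Returns None unchanged so the caller can
    decide how to build the missing features.
    """
    if knowledge is None:
        return None  # original code crashes here: None is not iterable
    return [k + (mL_n - len(k)) * [0] for k in knowledge]

print(pad_knowledge([[1, 2], [3]], 4))  # [[1, 2, 0, 0], [3, 0, 0, 0]]
```

In the repo itself the real fix is to supply the knowledge inputs (built during data preparation) rather than to silently skip padding.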

PS: I'm calling the infer function below the input prompt, but you mentioned elsewhere that the infer API is not working and to use test instead? Please correct me if I'm wrong.

Best Bill

guotong1988 commented 4 years ago

Prepare the data as written in the README. [image]

Bill-ai commented 4 years ago

I took the same example from the README file and pasted it into ctable.tables.json in the data_and_model folder.


python3 train.py 
BERT-type: uncased_L-12_H-768_A-12
Batch_size = 32
BERT parameters:
learning rate: 1e-05
Fine-tune BERT: False
vocab size: 30522
hidden_size: 768
num_hidden_layer: 12
num_attention_heads: 12
hidden_act: gelu
intermediate_size: 3072
hidden_dropout_prob: 0.1
attention_probs_dropout_prob: 0.1
max_position_embeddings: 512
type_vocab_size: 2
initializer_range: 0.02
Load pre-trained parameters.
Seq-to-SQL: the number of final BERT layers to be used: 2
Seq-to-SQL: the size of hidden dimension = 100
Seq-to-SQL: LSTM encoding layer size = 2
Seq-to-SQL: dropout rate = 0.3
Seq-to-SQL: learning rate = 0.001
Type question: what
Traceback (most recent call last):
  File "train.py", line 837, in <module>
    beam_size=1, show_table=False, show_answer_only=True
  File "train.py", line 653, in infer
    hds1 = tb1['header']
KeyError: 'header'

I had a look into train.py: infer assigns a variable hds1 from tb1['header']. Am I doing something wrong?
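For reference, the traceback shows infer reading tb1['header'], so each entry in the tables file must at least carry a "header" key. A minimal WikiSQL-style table entry might look like the sketch below; the field values are illustrative and the exact schema the repo expects may include additional keys:

```python
import json

# One table entry in WikiSQL-style format: column names, column types,
# and data rows. The "id" must match the table_id referenced by the query.
table = {
    "id": "1-0000000-1",
    "header": ["Player", "Country", "Points"],
    "types": ["text", "text", "real"],
    "rows": [["John", "USA", 12.0], ["Li", "China", 9.5]],
}

# Tables files in this repo are JSON Lines: one JSON object per line.
with open("ctable.tables.json", "w") as f:
    f.write(json.dumps(table) + "\n")
```

The KeyError: 'header' above suggests the pasted sample was a question/SQL entry, not a table entry of this shape.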

guotong1988 commented 4 years ago

You have to debug and modify the code yourself.

Bill-ai commented 4 years ago

I tried the repo from scratch, but I'm still getting the same error. In ctable.tables.json I added the same sample data as the first line. Can you please help me out here? I don't understand: if there is no header in your sample input data, how can infer access one? How is that possible?

guotong1988 commented 4 years ago

The table data is here:
https://drive.google.com/file/d/1iJvsf38f16el58H4NPINQ7uzal5-V4v4

guotong1988 commented 4 years ago

My sample data is the questions and SQLs. The table data is not uploaded.

Bill-ai commented 4 years ago

Okay, thanks for the clarity. But the thing is, I'm actually doing inference, so I don't need the questions and SQLs, I guess? I have already downloaded the table data file; I see some table and jsonl files, plus the uncased vocab and model files.

Can you tell me what I should insert into the ctable.tables.json file to run main.py for inference? Or, as a quicker route, how can I convert a CSV file so that running main.py produces queries directly? How do I preprocess the CSV file, and what changes do I need in main.py? Or send me a file I can run directly for inference.
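On the CSV question, a rough sketch of converting a CSV (header row plus data rows) into a WikiSQL-style table dict; the helper `csv_to_table` and its crude type guess are my own illustration, not part of this repo, and the repo may require further preprocessing:

```python
import csv
import json

def csv_to_table(csv_path, table_id):
    """Read a CSV and return a WikiSQL-style table dict."""
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]

    def col_type(i):
        # Guess "real" if every value in the column parses as a float.
        try:
            for r in data:
                float(r[i])
            return "real"
        except ValueError:
            return "text"

    return {
        "id": table_id,
        "header": header,
        "types": [col_type(i) for i in range(len(header))],
        "rows": data,
    }

# Append the converted table as one JSON line to the tables file.
# table = csv_to_table("my_data.csv", "my-table-1")
# with open("ctable.tables.json", "a") as f:
#     f.write(json.dumps(table) + "\n")
```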

Sorry for the long conversation. Can I email you?

sarkaramal commented 4 years ago

Hi, I am also facing the same issue. Is there any update on this? Can you share the test script for inference?

guotong1988 commented 4 years ago

Inference is not supported now.