xiaojunxu / SQLNet

Neural Network for generating structured queries from natural language.
BSD 3-Clause "New" or "Revised" License

Errors during training: help needed! #35

Open arcontechnologies opened 6 years ago

arcontechnologies commented 6 years ago

Python: 3.6, OS: Windows 10

Dear all,

I tried to figure out what is going wrong, but due to my limited knowledge I'm still facing some issues:

1/ First: without changing anything in the code, I get this error:

(base) C:\Users\albel\Documents\SQLNet>python train.py --ca
Loading from original dataset
Loading data from data/train_tok.jsonl
Loading data from data/train_tok.tables.jsonl
Loading data from data/dev_tok.jsonl
Loading data from data/dev_tok.tables.jsonl
Loading data from data/test_tok.jsonl
Loading data from data/test_tok.tables.jsonl
Loading word embedding from glove/glove.42B.300d.txt
Using fixed embedding
Using column attention on aggregator predicting
Using column attention on selection predicting
Using column attention on where predicting
C:\Users\albel\Documents\SQLNet\sqlnet\model\modules\aggregator_predict.py:55: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
Init dev acc_qm: 0.0
  breakdown on (agg, sel, where): [0.09250683 0.17895737 0.        ]
Epoch 1 @ 2018-08-20 14:06:54.446966
Traceback (most recent call last):
  File "train.py", line 128, in <module>
    sql_data, table_data, TRAIN_ENTRY))
  File "C:\Users\albel\Documents\SQLNet\sqlnet\utils.py", line 144, in epoch_train
    loss = model.loss(score, ans_seq, pred_entry, gt_where_seq)
  File "C:\Users\albel\Documents\SQLNet\sqlnet\model\sqlnet.py", line 152, in loss
    data = torch.from_numpy(np.array(agg_truth))
**TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: double, float, float16, int64, int32, and uint8.** 

2/ Second: when I'm forcing "dtype = float32" (I tried the other dtypes as well), I still get an error. Whatever I do to force the type of the "data" variable, I keep getting errors:

(base) C:\Users\albel\Documents\SQLNet>python train.py --ca
Loading from original dataset
Loading data from data/train_tok.jsonl
Loading data from data/train_tok.tables.jsonl
Loading data from data/dev_tok.jsonl
Loading data from data/dev_tok.tables.jsonl
Loading data from data/test_tok.jsonl
Loading data from data/test_tok.tables.jsonl
Loading word embedding from glove/glove.42B.300d.txt
Using fixed embedding
Using column attention on aggregator predicting
Using column attention on selection predicting
Using column attention on where predicting

Init dev acc_qm: 0.0
  breakdown on (agg, sel, where): [0.03811899 0.14772592 0.        ]
Epoch 1 @ 2018-08-20 13:58:02.098906
Traceback (most recent call last):
  File "train.py", line 128, in <module>
    sql_data, table_data, TRAIN_ENTRY))
  File "C:\Users\albel\Documents\SQLNet\sqlnet\utils.py", line 144, in epoch_train
    loss = model.loss(score, ans_seq, pred_entry, gt_where_seq)
  File "C:\Users\albel\Documents\SQLNet\sqlnet\model\sqlnet.py", line 152, in loss
    _**data = torch.from_numpy(np.array(agg_truth,dtype=np.float32))**_
**TypeError: float() argument must be a string or a number, not 'map'**

Can someone guide me on solving this? Thanks in advance.
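Both errors look like the same Python 2 → 3 incompatibility: SQLNet was written for Python 2, where `map()` returns a list, but in Python 3 `map()` returns a lazy iterator, so `loss()` in `sqlnet/model/sqlnet.py` ends up handing `np.array()` a map object. A minimal sketch of the problem and the usual fix (the sample data here is hypothetical, and the exact line in `sqlnet.py` may differ):

```python
import numpy as np

# In Python 3, map() returns a lazy iterator, not a list as in Python 2.
agg_truth = map(lambda x: x[0], [(1, 'q1'), (2, 'q2')])  # hypothetical data

# np.array(<map object>) yields a 0-d array of dtype=object, which
# torch.from_numpy rejects ("can't convert np.ndarray of type numpy.object_");
# forcing dtype=np.float32 instead raises "float() argument must be a string
# or a number, not 'map'" -- exactly the two errors above.

# Fix: materialize the iterator before building the array.
fixed = np.array(list(agg_truth), dtype=np.int64)
assert fixed.tolist() == [1, 2]
```

The same `list(...)` wrapping is typically needed wherever the Python 2 codebase relies on `map()` or `zip()` returning a list.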

pprassu commented 5 years ago

Hi, thanks for sharing your great effort. It's really amazing.

I am running the SQLNet project on a GPU server with Python 2.7, and I got the error below:

python train.py --toy
Loading from original dataset
Loading data from data/train_tok.jsonl
Loading data from data/train_tok.tables.jsonl
Loading data from data/dev_tok.jsonl
Loading data from data/dev_tok.tables.jsonl
Loading data from data/test_tok.jsonl
Loading data from data/test_tok.tables.jsonl
Loading word embedding from glove/glove.42B.300d.txt
Using fixed embedding
Not using column attention on aggregator predicting
Not using column attention on selection predicting
Not using column attention on where predicting
/raid/home/ppotipir/SQLNet/sqlnet/model/modules/aggregator_predict.py:55: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  att = self.softmax(att_val)
/raid/home/ppotipir/SQLNet/sqlnet/model/modules/selection_predict.py:55: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  att = self.softmax(att_val)
/raid/home/ppotipir/SQLNet/sqlnet/model/modules/sqlnet_condition_predict.py:123: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  num_col_att = self.softmax(num_col_att_val)
/raid/home/ppotipir/SQLNet/sqlnet/model/modules/sqlnet_condition_predict.py:138: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  num_att = self.softmax(num_att_val)
/raid/home/ppotipir/SQLNet/sqlnet/model/modules/sqlnet_condition_predict.py:163: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  col_att = self.softmax(col_att_val)
/raid/home/ppotipir/SQLNet/sqlnet/model/modules/sqlnet_condition_predict.py:209: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  op_att = self.softmax(op_att_val)
Init dev acc_qm: 0.0
  breakdown on (agg, sel, where): [0.061 0.15 0.021]
Epoch 1 @ 2019-03-02 06:30:13.671761
Traceback (most recent call last):
  File "train.py", line 128, in <module>
    sql_data, table_data, TRAIN_ENTRY)
  File "/raid/home/ppotipir/SQLNet/sqlnet/utils.py", line 145, in epoch_train
    cum_loss += loss.data.cpu().numpy()[0]*(ed - st)
IndexError: too many indices for array
(venv_py27) ppotipir@starpoc-gpu-02:~/SQLNet$ python train.p
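The IndexError at the end of this log is a PyTorch versioning issue rather than a Python 2/3 one: since PyTorch 0.4, a scalar loss is a 0-dimensional tensor, so `loss.data.cpu().numpy()` returns a 0-d array that cannot be indexed with `[0]`. A minimal sketch with a NumPy 0-d array standing in for the converted tensor (`ed` and `st` are the batch bounds from `epoch_train` in `utils.py`; the values here are hypothetical):

```python
import numpy as np

# loss.data.cpu().numpy() on a 0-dim tensor (PyTorch >= 0.4) gives a
# 0-d ndarray; indexing it reproduces the crash seen above.
loss_arr = np.array(0.5)
try:
    loss_arr[0]
except IndexError as e:
    assert 'too many indices' in str(e)

# Fix: extract the Python scalar with .item(), which also works directly
# on the tensor, i.e. cum_loss += loss.item() * (ed - st) in epoch_train.
ed, st = 64, 0  # hypothetical batch bounds
cum_loss = loss_arr.item() * (ed - st)
assert cum_loss == 32.0
```

Dropping the `[0]` (or switching to `loss.item()`) should let training proceed past this line.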

Could you please help me resolve this issue?

Thank you.
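As an aside, the repeated UserWarning lines in the log above are unrelated to the crash: newer PyTorch versions want an explicit `dim` argument for softmax. Assuming the attention scores should be normalized over the last axis (verify this against each call site in `aggregator_predict.py`, `selection_predict.py`, and `sqlnet_condition_predict.py`), the warning goes away with:

```python
import torch
import torch.nn as nn

# Passing dim explicitly silences the deprecation warning; dim=-1 here
# is an assumption -- it normalizes over the last axis, which must be
# checked against what each module's attention actually sums over.
softmax = nn.Softmax(dim=-1)
att = softmax(torch.ones(2, 3))
assert torch.allclose(att.sum(dim=-1), torch.ones(2))
```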

nomkat commented 5 years ago

Init dev acc_qm: 0.0
  breakdown on (agg, sel, where): [0.03811899 0.14772592 0.        ]
Epoch 1 @ 2018-08-20 13:58:02.098906

Was this part solved? I am having a similar problem.

siddharthrawal121 commented 3 years ago

@pprassu @arcontechnologies @nomkat Did anyone solve it?