microsoft / IRNet

An algorithm for cross-domain NL2SQL
MIT License

Crashes without printing any error! #26

Closed shantanu-kumar242 closed 4 years ago

shantanu-kumar242 commented 4 years ago

When I run the train.sh file, it exits after 2-3 minutes without giving any error. I have a sufficient amount (16 GB) of RAM. I have attached a screenshot. Has anyone faced the same issue? How can it be resolved? I tried this on Colab as well as on a local machine; the same issue occurs on both. Please help.

jaydeepb-inexture commented 4 years ago

@shantanu-kumar242 Try adding a GPU ID and a folder name: sh train.sh 0 model_result, where 0 is the GPU_ID and model_result is the directory name. This directory will be created inside the saved_model folder, where you have stored the pretrained model as mentioned in the README.md file. If you want to see the errors, you can find them in directory_name.log, provided you specified a directory name as described above. A minimal sketch of this invocation is shown below.
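
For reference, a minimal sketch of the invocation described above. The GPU ID (0) and the directory name (model_result) are placeholders, and the log file location may vary; per the comment, it is named after the directory.

```sh
# Train with GPU 0 and save results under a directory named model_result
sh train.sh 0 model_result

# Inspect the log for errors; the file is named after the directory given above
tail -n 50 model_result.log
```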

shantanu-kumar242 commented 4 years ago


Thanks for the help. I went through the log file and found the following error (screenshot attached). I ran the command as mentioned: conda install pytorch torchvision cudatoolkit=9.2 -c pytorch -c defaults -c numba/label/dev, but I am still getting the same error. I am running this model on CPU as my system doesn't have a GPU. Please help.

jaydeepb-inexture commented 4 years ago

which system are you using?

shantanu-kumar242 commented 4 years ago

which system are you using?

Linux

jaydeepb-inexture commented 4 years ago

Go to https://pytorch.org/, find the appropriate option for your system, and install PyTorch. Select None for CUDA, since you have no GPU.
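
As an illustration, the selector on pytorch.org generates a conda command for a CPU-only install when CUDA is set to None; the exact package names depend on the PyTorch version chosen, but it looks roughly like this:

```sh
# CPU-only PyTorch install via conda (no cudatoolkit);
# exact packages depend on the version selected on pytorch.org
conda install pytorch torchvision cpuonly -c pytorch
```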

shantanu-kumar242 commented 4 years ago

Thanks a lot, it's working now.