E:\Python\python.exe C:/Users/Bing/PycharmProjects/AGGCN-master/train.py --id 1 --seed 0 --hidden_dim 300 --lr 0.7 --rnn_hidden 300 --num_epoch 100 --pooling max --mlp_layers 1 --num_layers 2 --pooling_l2 0.002
Vocab size 375 loaded from file
Loading data_dir from dataset/tacred with batch size 50...
1 batches created for dataset/tacred/train.json
1 batches created for dataset/tacred/dev.json
Config saved to file ./saved_models/01/config.json
Overwriting old vocab_dir file at ./saved_models/01/vocab_dir.pkl
Running with the following configs:
        data_dir : dataset/tacred
        vocab_dir : dataset/vocab_dir
        emb_dim : 300
        ner_dim : 30
        pos_dim : 30
        hidden_dim : 300
        num_layers : 2
        input_dropout : 0.5
        gcn_dropout : 0.5
        word_dropout : 0.04
        topn : 10000000000.0
        lower : False
        heads : 3
        sublayer_first : 2
        sublayer_second : 4
        pooling : max
        pooling_l2 : 0.002
        mlp_layers : 1
        no_adj : False
        rnn : True
        rnn_hidden : 300
        rnn_layers : 1
        rnn_dropout : 0.5
        lr : 0.7
        lr_decay : 0.9
        decay_epoch : 5
        optim : sgd
        num_epoch : 100
        batch_size : 50
        max_grad_norm : 5.0
        log_step : 20
        log : logs.txt
        save_epoch : 100
        save_dir : ./saved_models
        id : 1
        info : 
        seed : 0
        cuda : False
        cpu : False
        load : False
        model_file : None
        num_class : 42
        vocab_size : 375
        model_save_dir : ./saved_models/01
Finetune all embeddings.
E:\Python\lib\site-packages\torch\nn\modules\rnn.py:38: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.5 and num_layers=1
"num_layers={}".format(dropout, num_layers))
THCudaCheck FAIL file=..\src\THC\THCGeneral.cpp line=70 error=38 : no CUDA-capable device is detected
Traceback (most recent call last):
File "C:/Users/Bing/PycharmProjects/AGGCN-master/train.py", line 119, in
Process finished with exit code 1
This means you did not run the model on a machine with a GPU. Our model requires the GPU version of PyTorch. You might consider running it on AWS or a similar cloud service.
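A quick way to confirm this is to check whether PyTorch can see a CUDA device at all before launching train.py. The snippet below is a minimal check using the standard torch.cuda API; it is not part of the AGGCN code:

import torch

# If no CUDA-capable GPU (or working driver) is visible, is_available() returns
# False, and any .cuda() call will fail with the THCudaCheck error shown above.
print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device count:", torch.cuda.device_count())
    print("Device name: ", torch.cuda.get_device_name(0))

If this prints "CUDA available: False", training on that machine will fail exactly as in the log above.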
I used your environment, but I ran into some issues. I ran your code on Windows 10, and it tells me "no CUDA-capable device is detected at ..\src\THC\THCGeneral.cpp:70".
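For reference, the traceback starts where the model is moved to the GPU (train.py line 119). A common PyTorch pattern to fail early with a clearer message, or to fall back to CPU while debugging, is sketched below; this is a general pattern, not the repository's actual code, and as noted above the authors expect training to run on a GPU:

import torch

def select_device(force_cpu: bool = False) -> torch.device:
    # Use CUDA only when a device is actually visible; otherwise fall back to
    # CPU instead of hitting the low-level "no CUDA-capable device" error.
    if not force_cpu and torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = select_device()
print("Running on:", device)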