Tencent / NeuralNLP-NeuralClassifier

An Open-source Neural Hierarchical Multi-label Text Classification Toolkit

Running `python train.py conf/train.hmcn.json` raises RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor #115

Closed ArtificialZeng closed 1 year ago

ArtificialZeng commented 2 years ago

```
Shrink dict over.
Size of doc_label dict is 102
Size of doc_token dict is 95439
Size of doc_char dict is 59
Size of doc_token_ngram dict is 0
Size of doc_keyword dict is 0
Size of doc_topic dict is 0
Traceback (most recent call last):
  File "train.py", line 261, in <module>
    train(config)
  File "train.py", line 228, in train
    trainer.train(train_data_loader, model, optimizer, "Train", epoch)
  File "train.py", line 102, in train
    ModeType.TRAIN)
  File "train.py", line 121, in run
    logits = model(batch)
  File "E:\A_IDE\Anaconda3\envs\pp212_gpu\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "G:\googleDownload\NeuralNLP-NeuralClassifier-master\NeuralNLP-NeuralClassifier-master\model\classification\hmcn.py", line 98, in forward
    output, last_hidden = self.rnn(embedding, length)
  File "E:\A_IDE\Anaconda3\envs\pp212_gpu\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "G:\googleDownload\NeuralNLP-NeuralClassifier-master\NeuralNLP-NeuralClassifier-master\model\rnn.py", line 83, in forward
    sorted_inputs, sorted_seq_lengths, batch_first=self.batch_first)
  File "E:\A_IDE\Anaconda3\envs\pp212_gpu\lib\site-packages\torch\nn\utils\rnn.py", line 244, in pack_padded_sequence
    _VF._pack_padded_sequence(input, lengths, batch_first)
RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor
```

jamestang0219 commented 1 year ago

> RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor

This happens because `pack_padded_sequence` only accepts a non-CUDA `lengths` tensor; it has already been fixed in jytoui's MR.
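For readers hitting the same error before the fix lands, the workaround is to move `lengths` to the CPU before packing. Below is a minimal sketch (the helper `pack_batch` is hypothetical, not the repository's actual `model/rnn.py`); since PyTorch 1.7, `pack_padded_sequence` requires `lengths` to be a 1D CPU int64 tensor even when the input tensor lives on the GPU.

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

def pack_batch(padded, lengths, batch_first=True):
    # The fix: `lengths` must be a 1D CPU int64 tensor, so move it off
    # the GPU (a no-op if it is already on the CPU).
    lengths = lengths.cpu()
    # enforce_sorted=False lets PyTorch sort/unsort the batch internally.
    return pack_padded_sequence(padded, lengths, batch_first=batch_first,
                                enforce_sorted=False)

# Usage: a padded batch of 3 sequences with true lengths 4, 2, 3.
padded = torch.zeros(3, 4, 8)      # (batch, max_len, embed_dim)
lengths = torch.tensor([4, 2, 3])  # in the failing run this arrived on cuda:0
packed = pack_batch(padded, lengths)
unpacked, out_lengths = pad_packed_sequence(packed, batch_first=True)
```

Calling `.cpu()` on the lengths just before `pack_padded_sequence` (around `model/rnn.py` line 83 in the traceback above) is enough; the packed data itself stays on whatever device the input was on.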