LeeSureman / Batch_Parallel_LatticeLSTM

Chinese NER using Lattice LSTM. Reproduction for ACL 2018 paper.
130 stars 16 forks

Error on Win10: IndexError: tensors used as indices must be long, byte or bool tensors #17

Closed Giaurora closed 4 years ago

Giaurora commented 4 years ago

When I run the code, I get the following error:

```
Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm 2019.1.3\helpers\pydev\pydevd.py", line 1758, in <module>
    main()
  File "C:\Program Files\JetBrains\PyCharm 2019.1.3\helpers\pydev\pydevd.py", line 1752, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "C:\Program Files\JetBrains\PyCharm 2019.1.3\helpers\pydev\pydevd.py", line 1147, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm 2019.1.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:/Users/NWJ/Downloads/Batch_Lattice/main_without_fitlog.py", line 192, in <module>
    callbacks=callbacks)
  File "C:\Users\NWJ\anaconda3\envs\BatchLattice\lib\site-packages\fastNLP\core\trainer.py", line 520, in __init__
    batch_size=check_batch_size)
  File "C:\Users\NWJ\anaconda3\envs\BatchLattice\lib\site-packages\fastNLP\core\trainer.py", line 920, in _check_code
    pred_dict = model(refined_batch_x)
  File "C:\Users\NWJ\anaconda3\envs\BatchLattice\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\NWJ\Downloads\Batch_Lattice\models.py", line 197, in forward
    embed_word = self.word_embed(skips_l2r_word)
  File "C:\Users\NWJ\anaconda3\envs\BatchLattice\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\NWJ\anaconda3\envs\BatchLattice\lib\site-packages\fastNLP\embeddings\static_embedding.py", line 282, in forward
    words = self.words_to_words[words]
IndexError: tensors used as indices must be long, byte or bool tensors
```

Environment: Win10 64-bit, Python 3.7.3, fastNLP 0.5.0, pytorch-cpu 1.1.0, numpy 1.18.1. How can I fix this?

LeeSureman commented 4 years ago

You could first try running the GPU build of PyTorch on Linux and see whether the problem still occurs there.

980202006 commented 4 years ago

I fixed this by adding an int64 type conversion in front of each index tensor.
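A minimal sketch of that workaround (a standalone illustration, not the project's actual code; the table and index names here are made up, with the index dtype matching the `torch.int32` fields shown later in this thread):

```python
import torch

# A lookup table standing in for an embedding matrix, plus an int32
# index tensor like the skips_l2r_word field in the traceback.
table = torch.arange(100, dtype=torch.float32).view(10, 10)
indices = torch.tensor([[1, 3, 5]], dtype=torch.int32)

# On PyTorch 1.1, indexing with an int32 tensor raises:
#   IndexError: tensors used as indices must be long, byte or bool tensors
# Casting the indices to int64 (long) before indexing avoids the error.
rows = table[indices.long()]
print(rows.shape)  # torch.Size([1, 3, 10])
```

The same `.long()` cast applies at each place an int32 tensor is used as an index.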

LeeSureman commented 4 years ago

Nice, so you solved it.

Giaurora commented 4 years ago

@980202006 That worked, thanks. But later I ran into another problem where the program terminates partway through a run. Have you seen this before?

```
{0: 'O', 1: 'I-PER.NOM', 2: 'I-PER.NAM', 3: 'B-PER.NOM', 4: 'B-PER.NAM', 5: 'I-ORG.NAM', 6: 'I-GPE.NAM', 7: 'B-GPE.NAM', 8: 'B-ORG.NAM', 9: 'I-LOC.NAM', 10: 'I-LOC.NOM', 11: 'I-ORG.NOM', 12: 'B-LOC.NAM', 13: 'B-LOC.NOM', 14: 'B-ORG.NOM', 15: 'B-GPE.NOM', 16: 'I-GPE.NOM'}
nn.init.constant(self.bias.data, val=0)
input fields after batch(if batch size is 1):
    chars: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([1, 26])
    target: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([1, 26])
    bigrams: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([1, 26])
    seq_len: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([1])
    skips_l2r_source: (1)type:torch.Tensor (2)dtype:torch.int32, (3)shape:torch.Size([1, 26, 2])
    skips_l2r_word: (1)type:torch.Tensor (2)dtype:torch.int32, (3)shape:torch.Size([1, 26, 2])
    skips_r2l_source: (1)type:torch.Tensor (2)dtype:torch.int32, (3)shape:torch.Size([1, 26, 2])
    skips_r2l_word: (1)type:torch.Tensor (2)dtype:torch.int32, (3)shape:torch.Size([1, 26, 2])
    lexicon_count: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([1, 26])
    lexicon_count_back: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([1, 26])
target fields after batch(if batch size is 1):
    target: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([1, 26])
    seq_len: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([1])
```

Process finished with exit code -1073741676 (0xC0000094)

LeeSureman commented 4 years ago

That looks like a division-by-zero error (0xC0000094 is STATUS_INTEGER_DIVIDE_BY_ZERO on Windows). Can you check under what circumstances it usually appears?

Giaurora commented 4 years ago

@LeeSureman Solved. The crash came from the dropout function in torch/nn/functional.py, triggered because the input passed in was 0; returning the input directly works around it.
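For reference, that workaround can be sketched as a small wrapper rather than editing the PyTorch source directly (this is an illustration under the assumption that "input为0" means a zero-element tensor, not the exact patch applied):

```python
import torch
import torch.nn.functional as F

def dropout_safe(x: torch.Tensor, p: float = 0.5, training: bool = True) -> torch.Tensor:
    """Apply dropout, but skip it for empty tensors.

    On the affected Windows setup, F.dropout crashed the process
    (exit code 0xC0000094, integer divide by zero) when the input
    had zero elements, so we return the input unchanged in that case.
    """
    if x.numel() == 0:
        return x
    return F.dropout(x, p=p, training=training)

empty = torch.empty(0, 8)           # zero-element batch
print(dropout_safe(empty).shape)    # torch.Size([0, 8])

full = torch.ones(2, 8)
print(dropout_safe(full, p=0.0).shape)  # torch.Size([2, 8])
```

Call sites that may receive empty batches would then use `dropout_safe` in place of `F.dropout`.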

LeeSureman commented 4 years ago

Congrats, and thanks for the insight.