LeeSureman / Batch_Parallel_LatticeLSTM

Chinese NER using Lattice LSTM. A reproduction of the ACL 2018 paper.

Hello, did you ever encounter an error like this when running the original code? #12

Closed yujh123 closed 4 years ago

yujh123 commented 4 years ago

The log is as follows:

CuDNN: True
GPU available: True
Status: train
Seg: True
Train file: data/ResumeNER/train.char.bmes
Dev file: data/ResumeNER/dev.char.bmes
Test file: data/ResumeNER/test.char.bmes
Raw file: None
Char emb: data/gigaword_chn.all.a2b.uni.ite50.vec
Bichar emb: None
Gaz file: data/ctb.50d.vec
Model saved to: data/model/saved_model.lstmcrf.
Load gaz file:
gaz alphabet size: 9462
gaz alphabet size: 10095
gaz alphabet size: 10714
build word pretrain emb...
Embedding: pretrain word:11327, prefect match:1884, case_match:0, oov:10, oov%:0.005277044854881266
build biword pretrain emb...
Embedding: pretrain word:0, prefect match:0, case_match:0, oov:21407, oov%:0.999953288490284
build gaz pretrain emb...
Embedding: pretrain word:704368, prefect match:10712, case_match:0, oov:1, oov%:9.333582228859437e-05
Training model...
DATA SUMMARY START:
  Tag scheme: BMES
  MAX SENTENCE LENGTH: 250
  MAX WORD LENGTH: -1
  Number normalized: True
  Use bigram: False
  Word alphabet size: 1895
  Biword alphabet size: 21408
  Char alphabet size: 1895
  Gaz alphabet size: 10714
  Label alphabet size: 29
  Word embedding size: 50
  Biword embedding size: 50
  Char embedding size: 30
  Gaz embedding size: 50
  Norm word emb: True
  Norm biword emb: True
  Norm gaz emb: False
  Norm gaz dropout: 0.5
  Train instance number: 3821
  Dev instance number: 463
  Test instance number: 477
  Raw instance number: 0
  Hyperpara iteration: 100
  Hyperpara batch size: 1
  Hyperpara lr: 0.015
  Hyperpara lr_decay: 0.05
  Hyperpara HP_clip: 5.0
  Hyperpara momentum: 0
  Hyperpara hidden_dim: 200
  Hyperpara dropout: 0.5
  Hyperpara lstm_layer: 1
  Hyperpara bilstm: True
  Hyperpara GPU: True
  Hyperpara use_gaz: True
  Hyperpara fix gaz emb: False
  Hyperpara use_char: False
DATA SUMMARY END.
Data setting saved to file: data/model/saved_model.lstmcrf..dset
build batched lstmcrf...
build batched bilstm...
build LatticeLSTM... forward , Fix emb: False gaz drop: 0.5
load pretrain word emb... (10714, 50)
E:\论文\问答系统\Lattice LSTM\LatticeLSTM\model\latticelstm.py:105: UserWarning: nn.init.orthogonal is now deprecated in favor of nn.init.orthogonal_.
  init.orthogonal(self.weight_ih.data)
E:\论文\问答系统\Lattice LSTM\LatticeLSTM\model\latticelstm.py:106: UserWarning: nn.init.orthogonal is now deprecated in favor of nn.init.orthogonal_.
  init.orthogonal(self.alpha_weight_ih.data)
E:\论文\问答系统\Lattice LSTM\LatticeLSTM\model\latticelstm.py:120: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
  init.constant(self.bias.data, val=0)
E:\论文\问答系统\Lattice LSTM\LatticeLSTM\model\latticelstm.py:121: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
  init.constant(self.alpha_bias.data, val=0)
E:\论文\问答系统\Lattice LSTM\LatticeLSTM\model\latticelstm.py:37: UserWarning: nn.init.orthogonal is now deprecated in favor of nn.init.orthogonal_.
  init.orthogonal(self.weight_ih.data)
E:\论文\问答系统\Lattice LSTM\LatticeLSTM\model\latticelstm.py:44: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
  init.constant(self.bias.data, val=0)
build LatticeLSTM... backward , Fix emb: False gaz drop: 0.5
load pretrain word emb... (10714, 50)
build batched crf...
finished built model.
Epoch: 0/100
 Learning rate is setted as: 0.015

Traceback (most recent call last):
  File "E:/论文/LatticeLSTM/main.py", line 443, in <module>
    train(data, save_model_dir, seg)
  File "E:/论文/LatticeLSTM/main.py", line 279, in train
    instance, gpu)
  File "E:/论文/LatticeLSTM/main.py", line 195, in batchify_with_label
    mask[idx, :seqlen] = torch.Tensor([1]*seqlen)
TypeError: mul(): argument 'other' (position 1) must be Tensor, not list
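For context: the failing line builds a padding mask, and on the PyTorch version in use, iterating over the batch's length tensor yields 0-dim tensors rather than Python ints, so `[1]*seqlen` falls back to tensor-by-list multiplication and raises the TypeError above. A minimal, version-safe sketch (variable names follow the traceback; sizes are illustrative, not the repo's actual data):

```python
import torch

# Per-sentence lengths, as batchify_with_label computes them.
seq_lengths = torch.LongTensor([3, 5])
batch_size, max_seq_len = len(seq_lengths), int(seq_lengths.max())
mask = torch.zeros((batch_size, max_seq_len)).byte()

for idx, seqlen in enumerate(seq_lengths):
    # Iterating a LongTensor yields 0-dim tensors, not ints. On the
    # affected PyTorch versions, `[1] * seqlen` therefore raises
    # "TypeError: mul(): argument 'other' ... must be Tensor, not list".
    # Casting to int first avoids the tensor-by-list multiplication:
    mask[idx, :seqlen] = torch.Tensor([1] * int(seqlen))

print(mask)  # rows of ones up to each sentence length, zero-padded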

GabrielThompson commented 4 years ago

It looks like a PyTorch version issue. Changing it to np.repeat([1], seqlen) fixed it.
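For reference, a hedged sketch of that suggestion applied to the line from the traceback (the surrounding loop is paraphrased, not the repo's exact source; `np` is NumPy):

```python
import numpy as np
import torch

seq_lengths = torch.LongTensor([3, 5])
mask = torch.zeros((len(seq_lengths), int(seq_lengths.max()))).byte()

for idx, seqlen in enumerate(seq_lengths):
    # Original (fails on the affected PyTorch versions):
    #   mask[idx, :seqlen] = torch.Tensor([1] * seqlen)
    # np.repeat accepts the 0-dim length tensor as its repeat count, so
    # this sidesteps the list-times-tensor multiplication; casting with
    # [1] * int(seqlen) would work equally well.
    mask[idx, :seqlen] = torch.Tensor(np.repeat([1], seqlen))
```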