jiesutd / LatticeLSTM

Chinese NER using Lattice LSTM. Code for ACL 2018 paper.

Decode step fails when training on my own data #48

Closed Veyronl closed 5 years ago

Veyronl commented 5 years ago

CuDNN: True
GPU available: False
Status: decode
Seg: True
Train file: data/conll03/train.bmes
Dev file: data/conll03/dev.bmes
Test file: data/conll03/test.bmes
Raw file: ./rd_data/test/test.txt
Char emb: data/gigaword_chn.all.a2b.uni.ite50.vec
Bichar emb: None
Gaz file: data/ctb.50d.vec
Data setting loaded from file: ./rd_data/test/test.dset
DATA SUMMARY START:
     Tag scheme: BMES
     MAX SENTENCE LENGTH: 250
     MAX WORD LENGTH: -1
     Number normalized: False
     Use bigram: False
     Word alphabet size: 2596
     Biword alphabet size: 31940
     Char alphabet size: 2596
     Gaz alphabet size: 13634
     Label alphabet size: 18
     Word embedding size: 50
     Biword embedding size: 50
     Char embedding size: 30
     Gaz embedding size: 50
     Norm word emb: True
     Norm biword emb: True
     Norm gaz emb: False
     Norm gaz dropout: 0.5
     Train instance number: 0
     Dev instance number: 0
     Test instance number: 0
     Raw instance number: 0
     Hyperpara iteration: 100
     Hyperpara batch size: 1
     Hyperpara lr: 0.015
     Hyperpara lr_decay: 0.05
     Hyperpara HP_clip: 5.0
     Hyperpara momentum: 0
     Hyperpara hidden_dim: 200
     Hyperpara dropout: 0.5
     Hyperpara lstm_layer: 1
     Hyperpara bilstm: True
     Hyperpara GPU: False
     Hyperpara use_gaz: True
     Hyperpara fix gaz emb: False
     Hyperpara use_char: False
DATA SUMMARY END.
Load Model from file: ./rd_data/test/demo_test.6.model
build batched lstmcrf...
build batched bilstm...
build LatticeLSTM... forward , Fix emb: False gaz drop: 0.5
load pretrain word emb... (13634, 50)
build LatticeLSTM... backward , Fix emb: False gaz drop: 0.5
load pretrain word emb... (13634, 50)
build batched crf...
Traceback (most recent call last):
  File "main_test.py", line 454, in <module>
    decode_results = load_model_decode(model_dir, data, 'raw', gpu, seg)
  File "main_test.py", line 348, in load_model_decode
    model.load_state_dict(torch.load(model_dir))
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 487, in load_state_dict
    .format(name, own_state[name].size(), param.size()))
RuntimeError: While copying the parameter named lstm.word_embeddings.weight, whose dimensions in the model are torch.Size([2596, 50]) and whose dimensions in the checkpoint are torch.Size([2527, 50]).

I only modified the main file and run_demo.sh.
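A quick way to confirm what the RuntimeError above is reporting is to compare the two sizes directly. Below is a minimal diagnostic sketch using the paths from the log above; reading the .dset with pickle and the `data.word_alphabet.size()` attribute are assumptions based on this repo's Data class, so adjust them to your copy:

```python
# Hypothetical diagnostic: compare the vocabulary size stored in the
# checkpoint against the word alphabet restored from the .dset file.
import pickle
import torch

dset_path = "./rd_data/test/test.dset"           # path taken from the log above
model_path = "./rd_data/test/demo_test.6.model"  # path taken from the log above

with open(dset_path, "rb") as fp:
    # The repo pickles its Data object; under Python 3 a Python 2 pickle
    # may need pickle.load(fp, encoding="latin1").
    data = pickle.load(fp)

state_dict = torch.load(model_path, map_location=lambda storage, loc: storage)
ckpt_rows = state_dict["lstm.word_embeddings.weight"].size(0)
dset_rows = data.word_alphabet.size()            # assumed Data attribute

print("word rows in checkpoint:", ckpt_rows)     # 2527 in the failing run
print("word alphabet in .dset :", dset_rows)     # 2596 in the failing run
```

If the two numbers differ, the .dset and .model were written by different training runs.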

jiesutd commented 5 years ago

Please show me how you changed the main file.

Veyronl commented 5 years ago

I copied main.py to main_test.py and only changed data.HP_iteration. In run_demo.sh I updated the corresponding train/test/dev file paths and pointed it at main_test.py. Training completes normally, but during prediction I get: RuntimeError: While copying the parameter named lstm.word_embeddings.weight, whose dimensions in the model are torch.Size([2596, 50]) and whose dimensions in the checkpoint are torch.Size([2527, 50]).

jiesutd commented 5 years ago

This happens when the .dset file does not match the .model file at load time. Please confirm that you are using a consistent pair of *.dset and *.model files; they should be generated in the same training run.
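For reference, here is a sketch of a consistent decode setup following the decode path visible in the traceback above. load_model_decode appears in the traceback; load_data_setting and generate_instance_with_gaz are assumed names from the repo's main.py, so verify them against your copy:

```python
# Sketch only: run in the context of main.py / main_test.py, where
# load_data_setting and load_model_decode are defined. The essential point
# is that dset_path and model_path must come from the SAME training run.
dset_path = "./rd_data/test/test.dset"           # saved when the model was trained
model_path = "./rd_data/test/demo_test.6.model"  # saved by that same run
raw_file = "./rd_data/test/test.txt"
gpu, seg = False, True

data = load_data_setting(dset_path)              # restores the training-time alphabets
data.generate_instance_with_gaz(raw_file, "raw") # builds raw instances with those alphabets
decode_results = load_model_decode(model_path, data, "raw", gpu, seg)
```

Rebuilding the alphabets from different data, instead of restoring them from the matching .dset, is exactly what produces a size mismatch like 2596 vs. 2527.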

Veyronl commented 5 years ago

That could be the problem. Thank you, I will give it a try.

htt912 commented 5 years ago

Hello, I ran your code last night, but the F1 score is only 0.408, using the demo.train.char, demo.test.char, and demo.dev.char datasets. Could that be because the data is too small? I have been studying your paper recently and am very interested in this area. Could you share the fully pre-split OntoNotes dataset? My email is 2733889576@qq.com.

jiesutd commented 5 years ago

@htt912, the demo data is only used to verify that you have configured the code correctly. You need to use real data to evaluate the model's performance.