According to https://github.com/jiesutd/LatticeLSTM/issues/84, the gaz dropout needs to be changed to 0.1.
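A minimal sketch of where that change might go, assuming the hyperparameter is the `HP_gaz_dropout` field on the `Data` object (the training log prints it as "Norm gaz dropout", but verify the exact attribute name in utils/data.py of your checkout):

```python
# Hypothetical tweak in main.py after the Data object is created;
# HP_gaz_dropout is an assumed name, check utils/data.py for the real one.
from utils.data import Data

data = Data()
data.HP_gaz_dropout = 0.1  # default is 0.5; issue #84 suggests 0.1
```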
@LeeSureman Thanks for helping with the replies.
However, looking at this run log, the OOV rate is abnormally high:
pretrain word:11327, prefect match:84, case_match:0, oov:3161, oov%:0.9738139248305607
So I suspect there is a problem with your input/output files.
Compare with the run log in https://github.com/jiesutd/LatticeLSTM/issues/84; at the very least, the OOV there is much lower:
pretrain word:11327, prefect match:3281, case_match:0, oov:75, oov%:0.0223413762288
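For anyone debugging the same mismatch: you can estimate the character OOV rate yourself before training. A minimal sketch, assuming a CoNLL-style file with one "char label" pair per line and a whitespace-separated embedding file (paths below are examples):

```python
# Estimate how many dataset characters are missing from a pretrained
# embedding file (one token followed by its vector per line).
def load_emb_vocab(emb_path):
    vocab = set()
    with open(emb_path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) > 2:          # skip a possible "count dim" header line
                vocab.add(parts[0])
    return vocab

def oov_rate(data_path, vocab):
    chars = set()
    with open(data_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:                    # blank lines separate sentences
                chars.add(line.split()[0])
    missing = sum(1 for c in chars if c not in vocab)
    return missing / max(len(chars), 1)

vocab = load_emb_vocab("data/gigaword_chn.all.a2b.uni.ite50.vec")
print("oov%%: %.4f" % oov_rate("weiboNER.conll.train", vocab))
```

If this prints something close to 0.97 like the log above, the characters in your file do not match the embedding vocabulary (for example, because each character still carries the golden-horse segmentation digit discussed later in this thread).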
@LeeSureman Hi, would you mind sharing the Weibo dataset you used?
https://github.com/hltcoe/golden-horse: I used the three files under data whose names contain "2nd".
Hi, I got the following error when running the code; could you tell me what it means?
Data setting saved to file: ./data/demo.dset
build batched lstmcrf...
build batched bilstm...
build LatticeLSTM... forward , Fix emb: False gaz drop: 0.5
load pretrain word emb... (13652, 50)
/home/amax/g/LatticeLSTM-master/model/latticelstm.py:104: UserWarning: nn.init.orthogonal is now deprecated in favor of nn.init.orthogonal_.
init.orthogonal(self.weight_ih.data)
/home/amax/g/LatticeLSTM-master/model/latticelstm.py:105: UserWarning: nn.init.orthogonal is now deprecated in favor of nn.init.orthogonal_.
init.orthogonal(self.alpha_weight_ih.data)
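(These two are only deprecation warnings; on newer PyTorch you can silence them by switching to the underscore-suffixed initializers in model/latticelstm.py:)

```python
# model/latticelstm.py, around lines 104-105: the non-underscore
# initializers are deprecated aliases on newer PyTorch.
init.orthogonal_(self.weight_ih.data)
init.orthogonal_(self.alpha_weight_ih.data)
```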
Traceback (most recent call last):
  File "main.py", line 436, in ...
RuntimeError: set_ is not allowed on a Tensor created from .data or .detach(). If your intent is to change the metadata of a Tensor (such as sizes / strides / storage / storage_offset) without autograd tracking the change, remove the .data / .detach() call and wrap the change in a `with torch.no_grad():` block.
For example, change:
    x.data.set_(y)
to:
    with torch.no_grad():
        x.set_(y)
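For context, the change the error message asks for looks like this in general (a generic sketch of the old vs. new pattern, not the actual code at main.py line 436):

```python
import torch

x = torch.zeros(3)
y = torch.ones(3)

# Old pattern that newer PyTorch rejects:
#   x.data.set_(y)

# Replacement suggested by the error message: drop .data and wrap the
# in-place call in a no_grad block so autograd does not track it.
with torch.no_grad():
    x.set_(y)
```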
@mingxixixi Please confirm that your Python and PyTorch versions are correct.
Hi, I am using PyTorch 0.3.1. After making the change from the comments, I still get this error: RuntimeError: invalid argument 1: the number of sizes provided must be greater or equal to the number of dimensions in the tensor at /pytorch/torch/lib/THC/generic/THCTensor.c:326
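Errors like this usually mean the code, which targets the old 0.3-era PyTorch API, is being run on a mismatched install, which is what the replies above are getting at. A quick way to confirm what you are actually running:

```python
# Print interpreter and framework versions; compare against the versions
# the LatticeLSTM README asks for (it targets old PyTorch 0.3.x).
import sys
import torch

print(sys.version)
print(torch.__version__)
```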
@LeeSureman Hi, I see the datasets in the 2nd files are in "char+position label" format; how did you process that?
That number should be word-segmentation information; just strip it out.
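Building on that answer, a minimal preprocessing sketch, assuming each non-blank line is the character with a trailing segmentation digit, then whitespace, then the label (file names below are the golden-horse 2nd files; adjust paths to your layout):

```python
# Strip the word-segmentation digit from golden-horse "char+digit label"
# lines, producing the plain "char label" format.
import re

def strip_seg_info(src_path, dst_path):
    with open(src_path, encoding="utf-8") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            line = line.strip()
            if not line:                        # blank line = sentence boundary
                dst.write("\n")
                continue
            parts = line.split()
            char = re.sub(r"\d+$", "", parts[0])  # drop the trailing digit
            dst.write("%s %s\n" % (char, parts[-1]))

for split in ("train", "dev", "test"):
    strip_seg_info("weiboNER_2nd_conll.%s" % split,
                   "weiboNER.conll.%s" % split)
```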
Thanks for the answer, @LeeSureman. Have you run the MSRA dataset? It doesn't include a dev set; how did you change the dataset paths in main.py? Many thanks!
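I have not seen the authors' exact setup, but one common workaround is to carve a dev split off the MSRA training file and point the Dev file path in main.py at it. A sketch with hypothetical file names:

```python
# Split a blank-line-separated CoNLL training file into train/dev parts.
import random

def split_train_dev(train_path, out_train, out_dev, dev_ratio=0.1, seed=42):
    with open(train_path, encoding="utf-8") as f:
        sentences = [s for s in f.read().split("\n\n") if s.strip()]
    random.Random(seed).shuffle(sentences)
    n_dev = int(len(sentences) * dev_ratio)
    with open(out_dev, "w", encoding="utf-8") as f:
        f.write("\n\n".join(sentences[:n_dev]) + "\n")
    with open(out_train, "w", encoding="utf-8") as f:
        f.write("\n\n".join(sentences[n_dev:]) + "\n")

split_train_dev("msra_train.char", "msra.train", "msra.dev")
```

Some people instead point the dev path at the test set, at the cost of leaking model selection onto the test data.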
Could you please share the yangjie_word_char_mix.txt file? I couldn't find it in the links the author shared. 2960240482@qq.com
The log file is as follows:
D:\Anaconda3\python.exe G:/experiment/main.py
CuDNN: True
GPU available: False
Status: train
Seg: True
Train file: ./data/onto4ner.cn/weiboNER.conll.train
Dev file: ./data/onto4ner.cn/weiboNER.conll.dev
Test file: ./data/onto4ner.cn/weiboNER.conll.test
Raw file: ./data/onto4ner.cn/weiboNER.conll.test
Char emb: data/gigaword_chn.all.a2b.uni.ite50.vec
Bichar emb: None
Gaz file: data/ctb.50d.vec
Model saved to: ./data/onto4ner.cn/saved_model
Load gaz file: data/ctb.50d.vec total size: 629353
gaz alphabet size: 677
gaz alphabet size: 767
gaz alphabet size: 858
build word pretrain emb...
Embedding: pretrain word:11327, prefect match:84, case_match:0, oov:3161, oov%:0.9738139248305607
build biword pretrain emb...
Embedding: pretrain word:0, prefect match:0, case_match:0, oov:42277, oov%:0.9999763470362837
build gaz pretrain emb...
Embedding: pretrain word:704368, prefect match:847, case_match:0, oov:10, oov%:0.011655011655011656
Training model...
DATA SUMMARY START:
Tag scheme: BIO
MAX SENTENCE LENGTH: 250
MAX WORD LENGTH: -1
Number normalized: True
Use bigram: False
Word alphabet size: 3246
Biword alphabet size: 42278
Char alphabet size: 158
Gaz alphabet size: 858
Label alphabet size: 16
Word embedding size: 50
Biword embedding size: 50
Char embedding size: 50
Gaz embedding size: 50
Norm word emb: True
Norm biword emb: True
Norm gaz emb: False
Norm gaz dropout: 0.5
Train instance number: 1350
Dev instance number: 270
Test instance number: 270
Raw instance number: 0
Hyperpara iteration: 100
Hyperpara batch size: 1
Hyperpara lr: 0.015
Hyperpara lr_decay: 0.05
Hyperpara HP_clip: 5.0
Hyperpara momentum: 0
Hyperpara hidden_dim: 200
Hyperpara dropout: 0.5
Hyperpara lstm_layer: 1
Hyperpara bilstm: True
Hyperpara GPU: False
Hyperpara use_gaz: True
Hyperpara fix gaz emb: False
Hyperpara use_char: False
DATA SUMMARY END.
Data setting saved to file: ./data/onto4ner.cn/saved_model.dset
build batched lstmcrf...
build batched bilstm...
build LatticeLSTM... forward , Fix emb: False gaz drop: 0.5
load pretrain word emb... (858, 50)
build LatticeLSTM... backward , Fix emb: False gaz drop: 0.5
load pretrain word emb... (858, 50)
build batched crf...
finished built model.
Epoch: 0/100
Learning rate is setted as: 0.015
Instance: 500; Time: 48.44s; loss: 6876.1025; acc: 25840/27265=0.9477
Instance: 1000; Time: 34.44s; loss: 5515.9680; acc: 51667/54737=0.9439
Instance: 1350; Time: 25.46s; loss: 3288.7032; acc: 69619/73780=0.9436
Epoch: 0 training finished. Time: 108.34s, speed: 12.46st/s, total loss: 15680.77370262146
gold_num = 301 pred_num = 0 right_num = 0
Dev: time: 6.29s, speed: 43.05st/s; acc: 0.9428, p: -1.0000, r: 0.0000, f: -1.0000
gold_num = 310 pred_num = 0 right_num = 0
Test: time: 6.66s, speed: 40.62st/s; acc: 0.9448, p: -1.0000, r: 0.0000, f: -1.0000
Epoch: 1/100
Learning rate is setted as: 0.014249999999999999
Instance: 500; Time: 37.29s; loss: 4565.2451; acc: 25885/27435=0.9435
Instance: 1000; Time: 35.23s; loss: 3729.4443; acc: 51309/54336=0.9443
Instance: 1350; Time: 25.63s; loss: 2592.1435; acc: 69696/73780=0.9446
Epoch: 1 training finished. Time: 98.14s, speed: 13.76st/s, total loss: 10886.832878112793
gold_num = 301 pred_num = 50 right_num = 34
Dev: time: 5.96s, speed: 45.42st/s; acc: 0.9481, p: 0.6800, r: 0.1130, f: 0.1937
Exceed previous best f score: -1
gold_num = 310 pred_num = 35 right_num = 22
Test: time: 6.93s, speed: 44.14st/s; acc: 0.9477, p: 0.6286, r: 0.0710, f: 0.1275
Epoch: 2/100
Learning rate is setted as: 0.0135375
Instance: 500; Time: 39.36s; loss: 2878.0598; acc: 25730/26938=0.9552
Instance: 1000; Time: 37.79s; loss: 3445.0646; acc: 52187/55068=0.9477
Instance: 1350; Time: 24.20s; loss: 2031.7176; acc: 69970/73780=0.9484
Epoch: 2 training finished. Time: 101.34s, speed: 13.32st/s, total loss: 8354.8420753479
gold_num = 301 pred_num = 110 right_num = 62
Dev: time: 5.87s, speed: 46.02st/s; acc: 0.9298, p: 0.5636, r: 0.2060, f: 0.3017
Exceed previous best f score: 0.19373219373219375
gold_num = 310 pred_num = 82 right_num = 34
Test: time: 5.76s, speed: 47.29st/s; acc: 0.9341, p: 0.4146, r: 0.1097, f: 0.1735
Epoch: 3/100
Learning rate is setted as: 0.012860624999999997
Instance: 500; Time: 37.55s; loss: 2718.9896; acc: 25936/27331=0.9490
Instance: 1000; Time: 36.35s; loss: 2820.8742; acc: 51557/54327=0.9490
Instance: 1350; Time: 25.09s; loss: 1802.1964; acc: 70014/73780=0.9490
Epoch: 3 training finished. Time: 98.99s, speed: 13.64st/s, total loss: 7342.060092926025
gold_num = 301 pred_num = 28 right_num = 21
Dev: time: 5.84s, speed: 46.27st/s; acc: 0.9466, p: 0.7500, r: 0.0698, f: 0.1277
gold_num = 310 pred_num = 18 right_num = 11
Test: time: 5.91s, speed: 45.79st/s; acc: 0.9472, p: 0.6111, r: 0.0355, f: 0.0671
Epoch: 4/100
Learning rate is setted as: 0.012217593749999998
Instance: 500; Time: 37.22s; loss: 2447.0826; acc: 27172/28385=0.9573
Instance: 1000; Time: 32.56s; loss: 2286.7070; acc: 51892/54389=0.9541
Instance: 1350; Time: 24.49s; loss: 1831.1641; acc: 70284/73780=0.9526
Epoch: 4 training finished. Time: 94.27s, speed: 14.32st/s, total loss: 6564.953800201416
gold_num = 301 pred_num = 152 right_num = 83
Dev: time: 6.27s, speed: 43.16st/s; acc: 0.9513, p: 0.5461, r: 0.2757, f: 0.3664
Exceed previous best f score: 0.3017031630170316
gold_num = 310 pred_num = 116 right_num = 55
Test: time: 7.61s, speed: 43.71st/s; acc: 0.9504, p: 0.4741, r: 0.1774, f: 0.2582
Epoch: 99/100
Learning rate is setted as: 9.348204032106312e-05
Instance: 500; Time: 36.70s; loss: 255.0433; acc: 28084/28217=0.9953
Instance: 1000; Time: 34.23s; loss: 216.6641; acc: 54065/54314=0.9954
Instance: 1350; Time: 25.20s; loss: 176.6801; acc: 73434/73780=0.9953
Epoch: 99 training finished. Time: 96.13s, speed: 14.04st/s, total loss: 648.3875198364258
gold_num = 301 pred_num = 214 right_num = 89
Dev: time: 6.04s, speed: 44.78st/s; acc: 0.9436, p: 0.4159, r: 0.2957, f: 0.3456
gold_num = 310 pred_num = 209 right_num = 87
Test: time: 6.19s, speed: 43.72st/s; acc: 0.9491, p: 0.4163, r: 0.2806, f: 0.3353
Process finished with exit code 0

Could you tell me where the problem is?