stanleylsx / entity_extractor_by_ner

NER models built on TensorFlow 2.3, all following the CRF paradigm: BiLSTM(IDCNN)-CRF, BERT-BiLSTM(IDCNN)-CRF, and BERT-CRF. Supports fine-tuning the pretrained model and adversarial training for named entity recognition; runs directly once configured.
390 stars · 73 forks

Why are precision and F1 both -1 for the first few training batches? Is this by design? #29

Closed tyn513 closed 2 years ago

tyn513 commented 2 years ago

3%|▎ | 20/725 [00:33<17:52, 1.52s/it] training batch: 20, loss: 29.96524, precision: -1.000 recall: 0.000 f1: -1.000 accuracy: 0.852
6%|▌ | 40/725 [01:04<17:31, 1.53s/it] training batch: 40, loss: 24.01720, precision: -1.000 recall: 0.000 f1: -1.000 accuracy: 0.889
8%|▊ | 60/725 [01:36<17:31, 1.58s/it] training batch: 60, loss: 21.02634, precision: -1.000 recall: 0.000 f1: -1.000 accuracy: 0.821
11%|█ | 80/725 [02:09<17:02, 1.59s/it] training batch: 80, loss: 17.43504, precision: -1.000 recall: 0.000 f1: -1.000 accuracy: 0.859
14%|█▍ | 100/725 [02:42<17:20, 1.66s/it] training batch: 100, loss: 12.46207, precision: -1.000 recall: 0.000 f1: -1.000 accuracy: 0.908
17%|█▋ | 120/725 [03:16<17:41, 1.75s/it] training batch: 120, loss: 9.91527, precision: 1.000 recall: 0.029 f1: 0.057 accuracy: 0.923
19%|█▉ | 140/725 [03:49<15:39, 1.61s/it] training batch: 140, loss: 11.09897, precision: 0.636 recall: 0.146 f1: 0.237 accuracy: 0.915

lmy86263 commented 2 years ago


Which model are you using? I ran into the same problem.

lmy86263 commented 2 years ago


I looked at the source code. The author uses exact match, and when nothing matches, the metric keeps its initial value, which is -1. So in the first few rounds, when the model is still undertrained, you can see -1. In my case it went further: the first few epochs of fine-tuned BERT+CRF were all -1. Conclusion: the code is fine (although the author might add a friendlier hint here; just a modest suggestion).
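The behavior described above can be illustrated with a minimal sketch (this is not the repo's actual code; the function name and entity representation are assumptions): the metrics start at -1 and only receive a real value when their denominator is non-zero, so a model that predicts no entities at all reports precision -1, recall 0, and F1 -1.

```python
def exact_match_metrics(true_entities, pred_entities):
    """Precision/recall/F1 over exact entity-span matches.

    Metrics are initialized to -1.0, mirroring the issue: they keep
    that value whenever their denominator is zero.
    """
    precision = recall = f1 = -1.0
    # An entity counts only if the predicted span matches exactly.
    correct = len(set(true_entities) & set(pred_entities))
    if pred_entities:                      # no predictions -> precision stays -1
        precision = correct / len(pred_entities)
    if true_entities:                      # gold entities exist -> recall is defined
        recall = correct / len(true_entities)
    if precision >= 0 and recall >= 0 and (precision + recall) > 0:
        f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Early training: the model tags everything "O" and predicts no entities.
print(exact_match_metrics([("PER", 0, 2)], []))
# Later: the single gold entity is found exactly.
print(exact_match_metrics([("PER", 0, 2)], [("PER", 0, 2)]))
```

This matches the log above: recall is 0.000 (gold entities exist but none are found) while precision and F1 sit at their -1 initial value until the first exact match appears.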

Kyro-Beluga commented 2 years ago

I have the same problem, but even after 20 epochs the metrics are still -1. The only change I made was reducing batch_size to 24; everything else is unchanged. Quite frustrating.

stanleylsx commented 2 years ago

Lower the learning_rate; set it to 5e-5.

stanleylsx commented 2 years ago

The metrics are now initialized to 0.0.