yeyupiaoling / PPASR

End-to-end Chinese speech recognition based on PaddlePaddle, from getting started to hands-on practice: a very simple introductory example plus a practical, enterprise-ready project. Supports today's most popular models: DeepSpeech2, Conformer, and Squeezeformer.
Apache License 2.0

Is there a way to reduce the loss more significantly? #79

Closed · a00147600 closed this issue 2 years ago

a00147600 commented 2 years ago

Over nearly 40 epochs of training, the loss has plateaued at around 30. What can I do to bring it down more significantly? Also, during training my monitor periodically goes black for 1-2 seconds; is that normal?

======================================================================
[2022-06-14 10:04:15.285608] Test batch: [0/9], loss: 8.85325, cer: 0.51057
[2022-06-14 10:04:16.613047] Test epoch: 37, time/epoch: 0:28:18.052782, loss: 30.09566, cer: 0.40359
======================================================================

======================================================================
[2022-06-14 10:32:23.290007] Test batch: [0/9], loss: 8.87349, cer: 0.54807
[2022-06-14 10:32:24.579778] Test epoch: 38, time/epoch: 0:28:00.067478, loss: 30.03847, cer: 0.41201
======================================================================

======================================================================
[2022-06-14 11:01:58.274182] Test batch: [0/9], loss: 8.81034, cer: 0.50275
[2022-06-14 11:01:59.503255] Test epoch: 39, time/epoch: 0:29:31.949186, loss: 30.05826, cer: 0.41106
======================================================================

[2022-06-14 11:02:02.875096] Model saved: models/deepspeech2\epoch_39
[2022-06-14 11:02:08.028573] Train epoch: [40/65], batch: [0/4147], loss: 19.21426, learning rate: 0.00000295, eta: 14:13:12
[2022-06-14 11:02:53.789614] Train epoch: [40/65], batch: [100/4147], loss: 11.79564, learning rate: 0.00000295, eta: 13:41:12
[2022-06-14 11:03:48.832556] Train epoch: [40/65], batch: [200/4147], loss: 20.48766, learning rate: 0.00000295, eta: 16:27:04
[2022-06-14 11:04:32.172316] Train epoch: [40/65], batch: [300/4147], loss: 8.64755, learning rate: 0.00000295, eta: 12:56:21
[2022-06-14 11:05:15.402984] Train epoch: [40/65], batch: [400/4147], loss: 11.81774, learning rate: 0.00000295, eta: 12:53:52
[2022-06-14 11:05:56.997767] Train epoch: [40/65], batch: [500/4147], loss: 11.44166, learning rate: 0.00000295, eta: 12:23:49
[2022-06-14 11:06:40.901006] Train epoch: [40/65], batch: [600/4147], loss: 41.28703, learning rate: 0.00000295, eta: 13:04:26
[2022-06-14 11:07:20.041311] Train epoch: [40/65], batch: [700/4147], loss: 14.05299, learning rate: 0.00000295, eta: 11:38:08
[2022-06-14 11:08:00.037083] Train epoch: [40/65], batch: [800/4147], loss: 9.01056, learning rate: 0.00000295, eta: 11:53:13
[2022-06-14 11:08:40.347232] Train epoch: [40/65], batch: [900/4147], loss: 13.31743, learning rate: 0.00000295, eta: 11:58:08
[2022-06-14 11:09:23.945885] Train epoch: [40/65], batch: [1000/4147], loss: 12.23478, learning rate: 0.00000295, eta: 12:56:05
[2022-06-14 11:10:04.002096] Train epoch: [40/65], batch: [1100/4147], loss: 17.44375, learning rate: 0.00000295, eta: 11:52:20
[2022-06-14 11:10:44.046700] Train epoch: [40/65], batch: [1200/4147], loss: 11.56029, learning rate: 0.00000295, eta: 11:51:15
[2022-06-14 11:11:23.263974] Train epoch: [40/65], batch: [1300/4147], loss: 15.32164, learning rate: 0.00000295, eta: 11:36:03
[2022-06-14 11:12:00.514824] Train epoch: [40/65], batch: [1400/4147], loss: 10.65211, learning rate: 0.00000295, eta: 11:00:27
[2022-06-14 11:12:38.894485] Train epoch: [40/65], batch: [1500/4147], loss: 11.22235, learning rate: 0.00000295, eta: 11:20:00
[2022-06-14 11:13:16.905444] Train epoch: [40/65], batch: [1600/4147], loss: 43.41085, learning rate: 0.00000295, eta: 11:12:50
[2022-06-14 11:13:56.407279] Train epoch: [40/65], batch: [1700/4147], loss: 46.16658, learning rate: 0.00000295, eta: 11:38:04
[2022-06-14 11:14:37.041716] Train epoch: [40/65], batch: [1800/4147], loss: 11.70115, learning rate: 0.00000295, eta: 11:57:22
[2022-06-14 11:15:14.021829] Train epoch: [40/65], batch: [1900/4147], loss: 12.12171, learning rate: 0.00000295, eta: 10:52:41
[2022-06-14 11:15:57.247356] Train epoch: [40/65], batch: [2000/4147], loss: 14.33492, learning rate: 0.00000295, eta: 12:42:09
[2022-06-14 11:16:37.619724] Train epoch: [40/65], batch: [2100/4147], loss: 11.77368, learning rate: 0.00000295, eta: 11:51:07
[2022-06-14 11:17:19.745393] Train epoch: [40/65], batch: [2200/4147], loss: 152.03545, learning rate: 0.00000295, eta: 12:21:23
[2022-06-14 11:17:59.496921] Train epoch: [40/65], batch: [2300/4147], loss: 20.90699, learning rate: 0.00000295, eta: 11:37:02
[2022-06-14 11:18:40.916790] Train epoch: [40/65], batch: [2400/4147], loss: 14.98408, learning rate: 0.00000295, eta: 12:07:22
[2022-06-14 11:19:21.128528] Train epoch: [40/65], batch: [2500/4147], loss: 20.67203, learning rate: 0.00000295, eta: 11:45:32
[2022-06-14 11:20:00.801767] Train epoch: [40/65], batch: [2600/4147], loss: 43.89627, learning rate: 0.00000295, eta: 11:35:21
[2022-06-14 11:20:40.140230] Train epoch: [40/65], batch: [2700/4147], loss: 16.31599, learning rate: 0.00000295, eta: 11:28:33
[2022-06-14 11:21:21.869656] Train epoch: [40/65], batch: [2800/4147], loss: 93.47940, learning rate: 0.00000295, eta: 12:10:07
[2022-06-14 11:22:01.138975] Train epoch: [40/65], batch: [2900/4147], loss: 7.72215, learning rate: 0.00000295, eta: 11:25:26
[2022-06-14 11:22:41.929649] Train epoch: [40/65], batch: [3000/4147], loss: 16.76766, learning rate: 0.00000295, eta: 11:52:29
[2022-06-14 11:23:23.777514] Train epoch: [40/65], batch: [3100/4147], loss: 8.19308, learning rate: 0.00000295, eta: 12:10:10
[2022-06-14 11:24:04.068327] Train epoch: [40/65], batch: [3200/4147], loss: 16.69079, learning rate: 0.00000295, eta: 11:42:23
[2022-06-14 11:24:44.368898] Train epoch: [40/65], batch: [3300/4147], loss: 9.62776, learning rate: 0.00000295, eta: 11:41:48
[2022-06-14 11:25:22.430370] Train epoch: [40/65], batch: [3400/4147], loss: 16.32537, learning rate: 0.00000295, eta: 11:02:14
[2022-06-14 11:26:01.142928] Train epoch: [40/65], batch: [3500/4147], loss: 27.95178, learning rate: 0.00000295, eta: 11:12:45
[2022-06-14 11:26:42.269876] Train epoch: [40/65], batch: [3600/4147], loss: 18.67277, learning rate: 0.00000295, eta: 11:53:59
[2022-06-14 11:27:21.381375] Train epoch: [40/65], batch: [3700/4147], loss: 29.80147, learning rate: 0.00000295, eta: 11:18:26
[2022-06-14 11:27:59.772030] Train epoch: [40/65], batch: [3800/4147], loss: 17.30582, learning rate: 0.00000295, eta: 11:05:09
[2022-06-14 11:28:39.490305] Train epoch: [40/65], batch: [3900/4147], loss: 14.24930, learning rate: 0.00000295, eta: 11:27:34
[2022-06-14 11:29:19.160223] Train epoch: [40/65], batch: [4000/4147], loss: 25.34712, learning rate: 0.00000295, eta: 11:26:15
[2022-06-14 11:29:59.266580] Train epoch: [40/65], batch: [4100/4147], loss: 13.17377, learning rate: 0.00000295, eta: 11:32:58
yeyupiaoling commented 2 years ago

How much data do you have, and did you change the batch size? You could consider using my pretrained model.

a00147600 commented 2 years ago

> How much data do you have, and did you change the batch size? You could consider using my pretrained model.

My dataset is 129.62 hours in total. I haven't changed batch_size or the other parameters; it's the default 32. Should I make it larger or smaller? I didn't use the pretrained model because my vocabulary needs to include domain-specific terms.

yeyupiaoling commented 2 years ago

You can use my pretrained model anyway; just pass it in through the pretrained-model parameter. It won't affect your setup.

https://github.com/yeyupiaoling/PPASR/blob/81d22aa6f814949f6d2803d59a42a3ac143cf7e3/train.py#L33
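
For illustration, here is a minimal sketch of how loading pretrained weights into a Paddle model can work; this is an assumption about the mechanism, not PPASR's exact code, and the checkpoint file name (model.pdparams) and toy model are stand-ins. Only parameters whose names and shapes match the checkpoint are restored, so an output layer sized to a different vocabulary simply keeps its fresh initialization:

```python
import paddle
import paddle.nn as nn

# Toy stand-in model; in PPASR this would be the DeepSpeech2 network.
model = nn.Sequential(nn.Linear(161, 256), nn.Linear(256, 4300))

# Checkpoint path and file name are assumptions for this sketch.
pretrained = paddle.load('models/deepspeech2/best_model/model.pdparams')
model_dict = model.state_dict()

# Keep only tensors whose names and shapes match the current model,
# so a custom-vocabulary output layer is left untouched.
matched = {k: v for k, v in pretrained.items()
           if k in model_dict and list(v.shape) == list(model_dict[k].shape)}
model.set_state_dict(matched)
print(f'restored {len(matched)}/{len(model_dict)} parameter tensors')
```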

a00147600 commented 2 years ago

> You can use my pretrained model anyway; just pass it in through the pretrained-model parameter. It won't affect your setup.
>
> https://github.com/yeyupiaoling/PPASR/blob/81d22aa6f814949f6d2803d59a42a3ac143cf7e3/train.py#L33

Thanks, I've loaded the pretrained model successfully. I had previously trained up to epoch 46, and the learning rate had dropped to 0.00000191; training now resumes from there. If I delete the epoch_44, epoch_45, epoch_46 and last_model folders, will training start over from the initial learning rate?

[2022-06-14 14:29:44.853156] Pretrained model loaded successfully: models/deepspeech2/best_model
[2022-06-14 14:29:45.744926] Model parameters and optimizer state restored successfully: models/deepspeech2\last_model
[2022-06-14 14:29:47.395013] Train epoch: [46/65], batch: [0/4147], loss: 22.07345, learning rate: 0.00000191, eta: 1 day, 13:59:35
[2022-06-14 14:30:30.840833] Train epoch: [46/65], batch: [100/4147], loss: 7.74502, learning rate: 0.00000191, eta: 9:59:47
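
For context, a sketch of how checkpoint resumption typically works; this is an assumption about the mechanism, not PPASR's exact code, and the file names (model.pdparams, optimizer.pdopt) and toy network are stand-ins. The optimizer checkpoint carries the learning-rate scheduler state, which is why the resumed run continues at 0.00000191; with no last_model folder to restore, the optimizer starts again from the configured initial rate:

```python
import os
import paddle
import paddle.nn as nn

# Toy stand-ins; in PPASR these would be the real network and optimizer.
model = nn.Linear(161, 4300)
scheduler = paddle.optimizer.lr.ExponentialDecay(learning_rate=5e-4, gamma=0.83)
optimizer = paddle.optimizer.Adam(learning_rate=scheduler,
                                  parameters=model.parameters())

resume_dir = 'models/deepspeech2/last_model'
if os.path.exists(os.path.join(resume_dir, 'optimizer.pdopt')):
    # Restores weights plus optimizer/scheduler state, including the decayed LR.
    model.set_state_dict(paddle.load(os.path.join(resume_dir, 'model.pdparams')))
    optimizer.set_state_dict(paddle.load(os.path.join(resume_dir, 'optimizer.pdopt')))
# Otherwise training starts fresh at the configured initial learning rate.
```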

yeyupiaoling commented 2 years ago

Yep.

a00147600 commented 2 years ago

> Yep.

Sorry, one more question. Now that I'm using the pretrained model, do I need to run create_data.py again, or should I just use the dataset folder that comes with the pretrained model? I'd appreciate a definite answer...

yeyupiaoling commented 2 years ago

Use your own.