WenmuZhou / PSENet.pytorch

A pytorch re-implementation of PSENet: Shape Robust Text Detection with Progressive Scale Expansion Network
GNU General Public License v3.0

Question about the loss #12

Closed xiiiiiiii closed 5 years ago

xiiiiiiii commented 5 years ago

Yes, it's me again, and I really hope you can answer this question. I retrained on my own training set, which is actually the ICDAR 2017 Chinese detection dataset, training from scratch. The loss comes out like this and it doesn't seem to converge. Is this result normal?

2019-06-16 14:05:04 INFO train.py: [0/600], [0/741], step: 0, 2.282 samples/sec, batch_loss: 0.6487, batch_loss_c: 0.5890, batch_loss_s: 0.7880, time:35.0638, lr:0.0001
2019-06-16 14:05:25 INFO train.py: [0/600], [10/741], step: 10, 3.978 samples/sec, batch_loss: 0.6959, batch_loss_c: 0.6517, batch_loss_s: 0.7990, time:20.1121, lr:0.0001
2019-06-16 14:05:34 INFO train.py: [0/600], [20/741], step: 20, 8.281 samples/sec, batch_loss: 0.6771, batch_loss_c: 0.6669, batch_loss_s: 0.7009, time:9.6603, lr:0.0001
2019-06-16 14:05:44 INFO train.py: [0/600], [30/741], step: 30, 8.115 samples/sec, batch_loss: 0.4114, batch_loss_c: 0.3713, batch_loss_s: 0.5050, time:9.8582, lr:0.0001
2019-06-16 14:05:54 INFO train.py: [0/600], [40/741], step: 40, 7.940 samples/sec, batch_loss: 0.4913, batch_loss_c: 0.4531, batch_loss_s: 0.5804, time:10.0759, lr:0.0001
2019-06-16 14:06:04 INFO train.py: [0/600], [50/741], step: 50, 8.016 samples/sec, batch_loss: 0.5735, batch_loss_c: 0.5390, batch_loss_s: 0.6541, time:9.9800, lr:0.0001
2019-06-16 14:06:24 INFO train.py: [0/600], [60/741], step: 60, 3.988 samples/sec, batch_loss: 0.5176, batch_loss_c: 0.4866, batch_loss_s: 0.5900, time:20.0616, lr:0.0001
2019-06-16 14:06:34 INFO train.py: [0/600], [70/741], step: 70, 8.212 samples/sec, batch_loss: 0.5106, batch_loss_c: 0.4947, batch_loss_s: 0.5476, time:9.7414, lr:0.0001
2019-06-16 14:06:44 INFO train.py: [0/600], [80/741], step: 80, 8.026 samples/sec, batch_loss: 0.4583, batch_loss_c: 0.4119, batch_loss_s: 0.5665, time:9.9672, lr:0.0001
2019-06-16 14:06:54 INFO train.py: [0/600], [90/741], step: 90, 7.636 samples/sec, batch_loss: 0.5777, batch_loss_c: 0.5524, batch_loss_s: 0.6367, time:10.4768, lr:0.0001
2019-06-16 14:07:05 INFO train.py: [0/600], [100/741], step: 100, 7.858 samples/sec, batch_loss: 0.5616, batch_loss_c: 0.5208, batch_loss_s: 0.6569, time:10.1801, lr:0.0001
2019-06-16 14:07:23 INFO train.py: [0/600], [110/741], step: 110, 4.386 samples/sec, batch_loss: 0.6482, batch_loss_c: 0.6230, batch_loss_s: 0.7070, time:18.2402, lr:0.0001
2019-06-16 14:07:33 INFO train.py: [0/600], [120/741], step: 120, 7.834 samples/sec, batch_loss: 0.6373, batch_loss_c: 0.6169, batch_loss_s: 0.6850, time:10.2115, lr:0.0001
2019-06-16 14:07:43 INFO train.py: [0/600], [130/741], step: 130, 8.250 samples/sec, batch_loss: 0.5493, batch_loss_c: 0.4940, batch_loss_s: 0.6783, time:9.6966, lr:0.0001
2019-06-16 14:07:52 INFO train.py: [0/600], [140/741], step: 140, 8.474 samples/sec, batch_loss: 0.7837, batch_loss_c: 0.7423, batch_loss_s: 0.8801, time:9.4409, lr:0.0001
2019-06-16 14:08:02 INFO train.py: [0/600], [150/741], step: 150, 8.075 samples/sec, batch_loss: 0.7084, batch_loss_c: 0.6462, batch_loss_s: 0.8534, time:9.9073, lr:0.0001
2019-06-16 14:08:21 INFO train.py: [0/600], [160/741], step: 160, 4.259 samples/sec, batch_loss: 0.6689, batch_loss_c: 0.6166, batch_loss_s: 0.7911, time:18.7838, lr:0.0001
2019-06-16 14:08:30 INFO train.py: [0/600], [170/741], step: 170, 8.376 samples/sec, batch_loss: 0.4608, batch_loss_c: 0.4029, batch_loss_s: 0.5959, time:9.5513, lr:0.0001
2019-06-16 14:08:40 INFO train.py: [0/600], [180/741], step: 180, 8.049 samples/sec, batch_loss: 0.5276, batch_loss_c: 0.4842, batch_loss_s: 0.6289, time:9.9389, lr:0.0001
2019-06-16 14:08:50 INFO train.py: [0/600], [190/741], step: 190, 8.038 samples/sec, batch_loss: 0.8508, batch_loss_c: 0.8100, batch_loss_s: 0.9460, time:9.9531, lr:0.0001
2019-06-16 14:09:05 INFO train.py: [0/600], [200/741], step: 200, 5.566 samples/sec, batch_loss: 0.5191, batch_loss_c: 0.4413, batch_loss_s: 0.7007, time:14.3731, lr:0.0001
2019-06-16 14:09:22 INFO train.py: [0/600], [210/741], step: 210, 4.579 samples/sec, batch_loss: 0.6712, batch_loss_c: 0.6369, batch_loss_s: 0.7514, time:17.4727, lr:0.0001
2019-06-16 14:09:32 INFO train.py: [0/600], [220/741], step: 220, 8.443 samples/sec, batch_loss: 0.5468, batch_loss_c: 0.5085, batch_loss_s: 0.6361, time:9.4758, lr:0.0001
2019-06-16 14:09:43 INFO train.py: [0/600], [230/741], step: 230, 7.284 samples/sec, batch_loss: 0.6120, batch_loss_c: 0.5617, batch_loss_s: 0.7295, time:10.9823, lr:0.0001
2019-06-16 14:09:52 INFO train.py: [0/600], [240/741], step: 240, 8.349 samples/sec, batch_loss: 0.6051, batch_loss_c: 0.5453, batch_loss_s: 0.7446, time:9.5820, lr:0.0001
2019-06-16 14:10:02 INFO train.py: [0/600], [250/741], step: 250, 7.841 samples/sec, batch_loss: 0.7247, batch_loss_c: 0.7025, batch_loss_s: 0.7766, time:10.2027, lr:0.0001
2019-06-16 14:10:21 INFO train.py: [0/600], [260/741], step: 260, 4.222 samples/sec, batch_loss: 0.5713, batch_loss_c: 0.5216, batch_loss_s: 0.6870, time:18.9468, lr:0.0001
2019-06-16 14:10:31 INFO train.py: [0/600], [270/741], step: 270, 8.194 samples/sec, batch_loss: 0.4275, batch_loss_c: 0.3918, batch_loss_s: 0.5106, time:9.7634, lr:0.0001
2019-06-16 14:10:41 INFO train.py: [0/600], [280/741], step: 280, 8.335 samples/sec, batch_loss: 0.7334, batch_loss_c: 0.7003, batch_loss_s: 0.8107, time:9.5977, lr:0.0001
2019-06-16 14:10:51 INFO train.py: [0/600], [290/741], step: 290, 7.972 samples/sec, batch_loss: 0.4526, batch_loss_c: 0.3952, batch_loss_s: 0.5863, time:10.0354, lr:0.0001
2019-06-16 14:11:00 INFO train.py: [0/600], [300/741], step: 300, 8.297 samples/sec, batch_loss: 0.6505, batch_loss_c: 0.6269, batch_loss_s: 0.7058, time:9.6419, lr:0.0001
2019-06-16 14:11:18 INFO train.py: [0/600], [310/741], step: 310, 4.493 samples/sec, batch_loss: 0.5410, batch_loss_c: 0.5006, batch_loss_s: 0.6354, time:17.8042, lr:0.0001
2019-06-16 14:11:28 INFO train.py: [0/600], [320/741], step: 320, 8.510 samples/sec, batch_loss: 0.5535, batch_loss_c: 0.4923, batch_loss_s: 0.6962, time:9.4011, lr:0.0001
2019-06-16 14:11:37 INFO train.py: [0/600], [330/741], step: 330, 8.138 samples/sec, batch_loss: 0.5340, batch_loss_c: 0.4953, batch_loss_s: 0.6243, time:9.8307, lr:0.0001
2019-06-16 14:11:47 INFO train.py: [0/600], [340/741], step: 340, 8.378 samples/sec, batch_loss: 0.4422, batch_loss_c: 0.3610, batch_loss_s: 0.6317, time:9.5487, lr:0.0001
2019-06-16 14:11:56 INFO train.py: [0/600], [350/741], step: 350, 8.571 samples/sec, batch_loss: 0.6063, batch_loss_c: 0.5282, batch_loss_s: 0.7885, time:9.3337, lr:0.0001
2019-06-16 14:12:15 INFO train.py: [0/600], [360/741], step: 360, 4.313 samples/sec, batch_loss: 0.6468, batch_loss_c: 0.6375, batch_loss_s: 0.6685, time:18.5492, lr:0.0001
2019-06-16 14:12:24 INFO train.py: [0/600], [370/741], step: 370, 8.555 samples/sec, batch_loss: 0.4454, batch_loss_c: 0.4177, batch_loss_s: 0.5101, time:9.3514, lr:0.0001
2019-06-16 14:12:35 INFO train.py: [0/600], [380/741], step: 380, 7.531 samples/sec, batch_loss: 0.6706, batch_loss_c: 0.6109, batch_loss_s: 0.8099, time:10.6232, lr:0.0001
2019-06-16 14:12:45 INFO train.py: [0/600], [390/741], step: 390, 7.875 samples/sec, batch_loss: 0.5807, batch_loss_c: 0.5188, batch_loss_s: 0.7251, time:10.1590, lr:0.0001
2019-06-16 14:12:58 INFO train.py: [0/600], [400/741], step: 400, 6.320 samples/sec, batch_loss: 0.5686, batch_loss_c: 0.5284, batch_loss_s: 0.6623, time:12.6579, lr:0.0001
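
For context on the three loss columns in the log: in the PSENet paper the total loss is a weighted sum of the text-region classification loss Lc and the shrunk-kernel loss Ls, L = λ·Lc + (1−λ)·Ls with λ = 0.7, both based on a dice loss. The logged values above are consistent with λ = 0.7 (e.g. 0.7·0.5890 + 0.3·0.7880 ≈ 0.6487). Below is a minimal PyTorch sketch of that combination, assuming the paper's simplified formulation rather than this repo's exact code (it omits OHEM and the text-region masking of Ls; `dice_loss` and `psenet_loss` are illustrative names):

```python
# Sketch of how batch_loss, batch_loss_c and batch_loss_s typically relate in PSENet.
# Assumes the paper's formulation: L = lambda * Lc + (1 - lambda) * Ls, lambda = 0.7.
import torch

def dice_loss(pred: torch.Tensor, gt: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """1 - dice coefficient, averaged over the batch (both inputs are N x H x W)."""
    pred = torch.sigmoid(pred).contiguous().view(pred.size(0), -1)
    gt = gt.contiguous().view(gt.size(0), -1)
    inter = (pred * gt).sum(dim=1)
    union = pred.sum(dim=1) + gt.sum(dim=1) + eps
    return (1 - 2 * inter / union).mean()

def psenet_loss(pred_text, pred_kernels, gt_text, gt_kernels, lam: float = 0.7):
    """Return (batch_loss, batch_loss_c, batch_loss_s) as they appear in the training log."""
    loss_c = dice_loss(pred_text, gt_text)                                  # text/non-text map
    loss_s = torch.stack([dice_loss(p, g)                                   # mean over the
                          for p, g in zip(pred_kernels, gt_kernels)]).mean()  # shrunk kernels
    return lam * loss_c + (1 - lam) * loss_s, loss_c, loss_s
```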

WenmuZhou commented 5 years ago

Please post a screenshot of the loss curve; it's hard to judge from raw text like this.
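
An illustrative way to get such a plot straight from the pasted log (not part of this repo; the `train.log` path and the smoothing window are assumptions):

```python
# Parse lines like the ones above and plot a smoothed batch_loss curve,
# which makes the trend much easier to judge than the raw log text.
import re
import matplotlib.pyplot as plt

pattern = re.compile(r"step: (\d+),.*?batch_loss: ([\d.]+)")

steps, losses = [], []
with open("train.log") as f:          # assumed path to the saved training log
    for line in f:
        m = pattern.search(line)
        if m:
            steps.append(int(m.group(1)))
            losses.append(float(m.group(2)))

window = 10                           # simple moving average to damp per-batch oscillation
smoothed = []
for i in range(len(losses)):
    chunk = losses[max(0, i - window + 1):i + 1]
    smoothed.append(sum(chunk) / len(chunk))

plt.plot(steps, losses, alpha=0.3, label="batch_loss")
plt.plot(steps, smoothed, label=f"moving average (window={window})")
plt.xlabel("step")
plt.ylabel("loss")
plt.legend()
plt.show()
```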

xiiiiiiii commented 5 years ago

My training set has 6,000 images and the test set has 2,000. I've only trained for 2 epochs so far, I know, but I'm getting anxious. The loss looks like this: [loss curve screenshots]

Is this normal? Also, this loss is computed once per batch; if it were logged once per epoch, would the curve look much better? Did you only get good results after 8 epochs? Roughly how long did it take for your loss to stabilize?
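
An epoch-level average does indeed smooth out most of the per-batch noise. A self-contained sketch (illustrative, not this repo's train.py) of accumulating batch losses and reporting one mean per epoch:

```python
# Accumulate per-batch losses and report a single mean per epoch,
# which hides the batch-to-batch oscillation seen in the log above.
class AverageMeter:
    def __init__(self):
        self.sum = 0.0
        self.count = 0

    def update(self, value: float, n: int = 1):
        self.sum += value * n
        self.count += n

    @property
    def avg(self) -> float:
        return self.sum / max(self.count, 1)

# usage inside a training loop; here the values are taken from the log above
epoch_loss = AverageMeter()
for batch_loss_value in [0.6487, 0.6959, 0.6771, 0.4114]:
    epoch_loss.update(batch_loss_value)
print(f"epoch mean loss: {epoch_loss.avg:.4f}")
```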

WenmuZhou commented 5 years ago

When I trained on ICDAR 2015 it took 10 epochs before I could see results. Your loss is oscillating, but it is steadily heading down.

xiiiiiiii commented 5 years ago

OK, thanks.

xiiiiiiii commented 5 years ago

It's me again. This time I want to ask: the code runs through and the test runs as well, but then it just hangs and stops. Is this normal? Should I just wait, or do I need to change some parameter? [screenshot] In other words, after the first evaluation it gets stuck, and it has been stuck for an hour now. Is that normal, and will waiting fix it? It should go on training afterwards, right? I set num_workers to 3 and train_batch_size to 14, with multi-GPU training on 7 GPUs.

xiiiiiiii commented 5 years ago

Argh. I changed num_workers to 12 and set batch_size to 4, and it still hangs right after the first evaluation. Why is that? It's really frustrating. When I run train_epoch and eval separately, each of them loops fine on its own. Is my num_workers set too high, or do I really just need to wait a long time?

xiiiiiiii commented 5 years ago

I'm back again. Setting num_workers to 0 fixed it, so is the cause a deadlock in the DataLoader's multi-process data loading? But my other programs train fine with num_workers=4 without hanging, and with num_workers=0 data loading is painfully slow. :(

WenmuZhou commented 5 years ago

Try changing this to 50: https://github.com/WenmuZhou/PSENet.pytorch/blob/3df90a8d737e2d96432df35589d8822448c82989/dataset/augment.py#L244

xiiiiiiii commented 5 years ago

Found the problem, and it wasn't the number of images. While debugging I saw the program was stuck at this line: https://github.com/WenmuZhou/PSENet.pytorch/blob/3df90a8d737e2d96432df35589d8822448c82989/dataset/data_utils.py#L106 It seems to be an issue between OpenCV and Python multiprocessing. After upgrading OpenCV to 4.1 it works; see https://github.com/pytorch/pytorch/issues/1355#issuecomment-341291968
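
For anyone else who hits this: besides upgrading OpenCV, the workaround commonly suggested in the linked PyTorch issue is to disable OpenCV's internal thread pool in the main process and in each DataLoader worker. A hedged sketch (`DummyCvDataset` and the loader settings are illustrative, not this repo's code):

```python
# OpenCV's internal thread pool can deadlock when a Dataset that calls cv2 functions
# is used with num_workers > 0. Disabling OpenCV threads usually avoids the hang.
import cv2
import numpy as np
from torch.utils.data import DataLoader, Dataset

cv2.setNumThreads(0)  # keep OpenCV single-threaded in the main process

def worker_init_fn(worker_id: int):
    # also disable OpenCV's thread pool inside every DataLoader worker process
    cv2.setNumThreads(0)

class DummyCvDataset(Dataset):
    """Stand-in for the real dataset: any Dataset whose __getitem__ calls cv2."""
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        img = np.zeros((32, 32, 3), dtype=np.uint8)
        return cv2.resize(img, (16, 16))

if __name__ == "__main__":
    loader = DataLoader(DummyCvDataset(), batch_size=4, num_workers=2,
                        worker_init_fn=worker_init_fn)
    for _ in loader:
        pass  # iterating once confirms the workers start and exit without hanging
```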

xiiiiiiii commented 5 years ago

May I ask roughly how many epochs it took you to reach the 85% accuracy mentioned in the README? I've trained for 30 epochs and the accuracy keeps hovering around 50%. Do I need to train for all 600 epochs before the results get good? I keep worrying about overfitting. My accuracy curve looks like this: [accuracy plot] and the per-epoch loss looks like this: [loss plot] Is this accuracy level expected, or do I really need to finish all 600 epochs to get good results?

WenmuZhou commented 5 years ago

The best result in the README is only 82, and reaching 80+ takes around 300 epochs.