mertmerci opened 3 years ago
Hi, maybe your lr is too large, try 0.001 instead.
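That suggestion can be sketched in PyTorch; the one-layer model here is a hypothetical stand-in for the repo's actual network, and the momentum/weight-decay values mirror the defaults mentioned later in this thread:

```python
import torch

# Hypothetical stand-in model; the repo's actual network differs.
model = torch.nn.Conv2d(3, 2, kernel_size=3, padding=1)

# The logs show lr starting near 1e-2; trying 1e-3 as suggested.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            momentum=0.9, weight_decay=1e-4)

# To lower the lr of an already-constructed optimizer mid-run:
for group in optimizer.param_groups:
    group["lr"] = 1e-3
```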
@mertmerci Hi, I have the same situation as you. I tried lr=0.001 and lr=0.0003, but it still doesn't work. Have you solved this problem? What's the best training loss you can get now?
I have not solved the problem yet. The best loss I can get is around 0.7.
Epoch: [ 5/500] Iter [ 345/ 647] || Time: 1166.8377 sec || lr: 0.00990037 || Loss: 0.6414
Epoch: [ 5/500] Iter [ 355/ 647] || Time: 1170.0967 sec || lr: 0.00990010 || Loss: 0.4728
Epoch: [ 5/500] Iter [ 365/ 647] || Time: 1173.3334 sec || lr: 0.00989982 || Loss: 0.6681
Epoch: [ 5/500] Iter [ 375/ 647] || Time: 1176.5961 sec || lr: 0.00989954 || Loss: 0.5141
Epoch: [ 5/500] Iter [ 385/ 647] || Time: 1179.8637 sec || lr: 0.00989926 || Loss: 0.4688
Epoch: [ 5/500] Iter [ 395/ 647] || Time: 1183.0879 sec || lr: 0.00989898 || Loss: 0.6347
Epoch: [ 5/500] Iter [ 405/ 647] || Time: 1186.2374 sec || lr: 0.00989870 || Loss: 0.4698
Epoch: [ 5/500] Iter [ 415/ 647] || Time: 1189.4530 sec || lr: 0.00989842 || Loss: 0.6898
Epoch: [ 5/500] Iter [ 425/ 647] || Time: 1192.6652 sec || lr: 0.00989815 || Loss: 0.5331
Epoch: [ 5/500] Iter [ 435/ 647] || Time: 1195.9574 sec || lr: 0.00989787 || Loss: 0.7055
Epoch: [ 5/500] Iter [ 445/ 647] || Time: 1199.3273 sec || lr: 0.00989759 || Loss: 0.5093
Epoch: [ 5/500] Iter [ 455/ 647] || Time: 1202.6462 sec || lr: 0.00989731 || Loss: 0.4337
Epoch: [ 5/500] Iter [ 465/ 647] || Time: 1205.8542 sec || lr: 0.00989703 || Loss: 0.4287
Epoch: [ 5/500] Iter [ 475/ 647] || Time: 1209.0492 sec || lr: 0.00989675 || Loss: 0.6152
Epoch: [ 5/500] Iter [ 485/ 647] || Time: 1212.3493 sec || lr: 0.00989647 || Loss: 1.2922
Epoch: [ 5/500] Iter [ 495/ 647] || Time: 1215.4831 sec || lr: 0.00989620 || Loss: 0.4719
Epoch: [ 5/500] Iter [ 505/ 647] || Time: 1218.5399 sec || lr: 0.00989592 || Loss: 0.9275
Epoch: [ 5/500] Iter [ 515/ 647] || Time: 1221.7445 sec || lr: 0.00989564 || Loss: 0.5310
Epoch: [ 5/500] Iter [ 525/ 647] || Time: 1224.8567 sec || lr: 0.00989536 || Loss: 0.7151
Epoch: [ 5/500] Iter [ 535/ 647] || Time: 1228.1229 sec || lr: 0.00989508 || Loss: 0.4688
Epoch: [ 5/500] Iter [ 545/ 647] || Time: 1231.3132 sec || lr: 0.00989480 || Loss: 0.5770
Epoch: [ 5/500] Iter [ 555/ 647] || Time: 1234.6185 sec || lr: 0.00989453 || Loss: 0.6590
Epoch: [ 5/500] Iter [ 565/ 647] || Time: 1238.0210 sec || lr: 0.00989425 || Loss: 0.3722
Epoch: [12/500] Iter [ 166/ 647] || Time: 2583.1239 sec || lr: 0.00977914 || Loss: 0.5476
Epoch: [12/500] Iter [ 176/ 647] || Time: 2586.6632 sec || lr: 0.00977886 || Loss: 0.7562
Epoch: [12/500] Iter [ 186/ 647] || Time: 2590.0186 sec || lr: 0.00977858 || Loss: 0.5186
Epoch: [12/500] Iter [ 196/ 647] || Time: 2593.0659 sec || lr: 0.00977830 || Loss: 1.0565
Epoch: [12/500] Iter [ 206/ 647] || Time: 2596.3114 sec || lr: 0.00977802 || Loss: 0.6302
Epoch: [12/500] Iter [ 216/ 647] || Time: 2599.6234 sec || lr: 0.00977774 || Loss: 0.5147
Epoch: [12/500] Iter [ 226/ 647] || Time: 2602.8056 sec || lr: 0.00977746 || Loss: 0.8301
Epoch: [12/500] Iter [ 236/ 647] || Time: 2605.9763 sec || lr: 0.00977718 || Loss: 0.5063
Epoch: [12/500] Iter [ 246/ 647] || Time: 2609.2808 sec || lr: 0.00977690 || Loss: 0.6625
Epoch: [12/500] Iter [ 256/ 647] || Time: 2612.6594 sec || lr: 0.00977663 || Loss: 0.6614
Epoch: [12/500] Iter [ 266/ 647] || Time: 2615.8317 sec || lr: 0.00977635 || Loss: 1.1284
Epoch: [12/500] Iter [ 276/ 647] || Time: 2619.1268 sec || lr: 0.00977607 || Loss: 0.6293
Epoch: [12/500] Iter [ 286/ 647] || Time: 2622.3897 sec || lr: 0.00977579 || Loss: 0.4887
Epoch: [12/500] Iter [ 296/ 647] || Time: 2625.7109 sec || lr: 0.00977551 || Loss: 0.6670
Epoch: [12/500] Iter [ 306/ 647] || Time: 2629.1389 sec || lr: 0.00977523 || Loss: 0.6722
I'm training on my own dataset, which has 2 categories (including background).
Please confirm that the labels are correct; maybe we can discuss it more smoothly over WeChat.
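One quick way to verify the labels is to scan the mask files for unexpected values. This is a minimal sketch assuming the masks store integer class ids, with 2 classes and 255 as the ignore index (both assumptions; adjust to match the dataset loader):

```python
import numpy as np

def check_mask_labels(mask, num_classes=2, ignore_index=255):
    """Return any label ids in the mask that are neither valid class
    ids (0..num_classes-1) nor the ignore index."""
    values = set(np.unique(mask).tolist())
    allowed = set(range(num_classes)) | {ignore_index}
    return values - allowed

# Example: a mask saved with raw grayscale values instead of class ids
bad_mask = np.array([[0, 1], [38, 0]], dtype=np.uint8)
unexpected = check_mask_labels(bad_mask)  # {38}
```

If this flags values like 38 or 75, the masks likely contain grayscale intensities rather than class ids, which would keep the loss from decreasing.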
Thank you very much. This is my WeChat phone number: 13716670388.
Having the same problem. What was the solution?
I have the same problem. Have you solved it? Could you share your learning rate and batch_size with us? Thank you!
I'm running the training script on the Cityscapes dataset mentioned above, but the loss does not decrease as expected. In the beginning it was around 2, and by the 90th epoch it is still hovering around 1. All my hyperparameters are at their defaults: learning rate 1e-2, momentum 0.9, weight decay 1e-4.
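For reference, the slowly decaying lr values in the logs above look consistent with the polynomial ("poly") schedule commonly used for Cityscapes segmentation training. That is an assumption about this repo's scheduler, but the shape can be sketched as:

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Polynomial decay: lr shrinks from base_lr toward 0 over
    max_iter steps. power=0.9 is a common segmentation default
    (an assumption; the repo's exact schedule may differ)."""
    return base_lr * (1.0 - cur_iter / max_iter) ** power

# With 500 epochs x 647 iters/epoch, the per-step decay is tiny,
# which matches the very gradual lr changes in the logs above.
total_iters = 500 * 647
lr_start = poly_lr(0.01, 0, total_iters)
lr_later = poly_lr(0.01, 10_000, total_iters)
```

With such a slow decay, the lr stays near 1e-2 for many epochs, so if 1e-2 is too large for a given dataset, the loss can plateau for a long time before the schedule alone brings it down.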