wasidennis / AdaptSegNet

Learning to Adapt Structured Output Space for Semantic Segmentation, CVPR 2018 (spotlight)
847 stars 203 forks

How is the trend of loss changing? #22

Closed Sunting78 closed 4 years ago

Sunting78 commented 6 years ago

I am training on my own data with this method. loss_adv1 is increasing, while loss_seg1 and loss_D1 are decreasing. In this situation, should I make LAMBDA_ADV_TARGET1 larger?

Sunting78 commented 6 years ago

What is the expected trend of the losses?

wasidennis commented 6 years ago

Looks like the trend of your losses is correct. The adversarial loss will gradually go up, while the discriminator loss will decrease (settling within a range of around 0.2~0.5 is reasonable). However, to obtain the best performance on your own data, you should still tune the weights of the different losses.
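To make the interplay of these losses concrete, here is a minimal plain-Python sketch (not the repo's training code; `lambda_adv` stands in for the `LAMBDA_ADV_TARGET1` option, and the weight value is illustrative):

```python
import math

def bce(p, label):
    # Binary cross-entropy of a single predicted probability p
    # against a 0/1 domain label.
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

# A freshly initialized discriminator outputs ~0.5 for both domains,
# so its loss starts near bce(0.5, 1) + bce(0.5, 0) = 2 * ln(2) ~= 1.386,
# and a well-trained one settles well below that.
loss_d_at_chance = bce(0.5, 1) + bce(0.5, 0)

# The segmentation network minimizes its supervised loss plus the
# weighted adversarial term; a larger weight pushes harder on fooling
# the discriminator, at the risk of hurting segmentation accuracy.
lambda_adv = 0.001          # illustrative weight, not the repo default
loss_seg1 = 1.057           # supervised segmentation loss
loss_adv1 = 0.693           # adversarial loss on target predictions
total = loss_seg1 + lambda_adv * loss_adv1
```

Because `lambda_adv` is small, the adversarial term nudges rather than dominates the segmentation objective, which is why tuning it matters when the domains change.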

Sunting78 commented 6 years ago

```
iter = 0/ 250000, loss_seg1 = 1.057 loss_adv1 = 0.693 loss_D1 = 1.386 loss_source = 0.69250 loss_target = 0.69357 exp = ./snapshots/NICE/
iter = 1/ 250000, loss_seg1 = 2.131 loss_adv1 = 0.757 loss_D1 = 1.387 loss_source = 0.75362 loss_target = 0.63319 exp = ./snapshots/NICE/
iter = 2/ 250000, loss_seg1 = 0.754 loss_adv1 = 0.723 loss_D1 = 1.383 loss_source = 0.71783 loss_target = 0.66470 exp = ./snapshots/NICE/
iter = 3/ 250000, loss_seg1 = 0.297 loss_adv1 = 0.690 loss_D1 = 1.379 loss_source = 0.68311 loss_target = 0.69629 exp = ./snapshots/NICE/
iter = 4/ 250000, loss_seg1 = 0.566 loss_adv1 = 0.698 loss_D1 = 1.377 loss_source = 0.68871 loss_target = 0.68825 exp = ./snapshots/NICE/
iter = 5/ 250000, loss_seg1 = 0.369 loss_adv1 = 0.701 loss_D1 = 1.372 loss_source = 0.68562 loss_target = 0.68594 exp = ./snapshots/NICE/
iter = 6/ 250000, loss_seg1 = 0.807 loss_adv1 = 0.738 loss_D1 = 1.372 loss_source = 0.72002 loss_target = 0.65161 exp = ./snapshots/NICE/
iter = 7/ 250000, loss_seg1 = 0.526 loss_adv1 = 0.733 loss_D1 = 1.374 loss_source = 0.71833 loss_target = 0.65603 exp = ./snapshots/NICE/
iter = 8/ 250000, loss_seg1 = 0.385 loss_adv1 = 0.680 loss_D1 = 1.376 loss_source = 0.66767 loss_target = 0.70855 exp = ./snapshots/NICE/
iter = 9/ 250000, loss_seg1 = 0.335 loss_adv1 = 0.677 loss_D1 = 1.367 loss_source = 0.65357 loss_target = 0.71302 exp = ./snapshots/NICE/
iter = 10/ 250000, loss_seg1 = 0.357 loss_adv1 = 0.673 loss_D1 = 1.358 loss_source = 0.63801 loss_target = 0.71970 exp = ./snapshots/NICE/
iter = 11/ 250000, loss_seg1 = 0.296 loss_adv1 = 0.677 loss_D1 = 1.352 loss_source = 0.63407 loss_target = 0.71746 exp = ./snapshots/NICE/
iter = 12/ 250000, loss_seg1 = 0.297 loss_adv1 = 0.707 loss_D1 = 1.341 loss_source = 0.64982 loss_target = 0.69114 exp = ./snapshots/NICE/
iter = 13/ 250000, loss_seg1 = 0.295 loss_adv1 = 0.744 loss_D1 = 1.330 loss_source = 0.66527 loss_target = 0.66442 exp = ./snapshots/NICE/
iter = 14/ 250000, loss_seg1 = 0.261 loss_adv1 = 0.723 loss_D1 = 1.320 loss_source = 0.62876 loss_target = 0.69109 exp = ./snapshots/NICE/
iter = 15/ 250000, loss_seg1 = 0.229 loss_adv1 = 0.730 loss_D1 = 1.311 loss_source = 0.61770 loss_target = 0.69349 exp = ./snapshots/NICE/
iter = 16/ 250000, loss_seg1 = 0.425 loss_adv1 = 0.822 loss_D1 = 1.306 loss_source = 0.67509 loss_target = 0.63116 exp = ./snapshots/NICE/
iter = 17/ 250000, loss_seg1 = 0.350 loss_adv1 = 0.745 loss_D1 = 1.297 loss_source = 0.59911 loss_target = 0.69796 exp = ./snapshots/NICE/
iter = 18/ 250000, loss_seg1 = 0.187 loss_adv1 = 0.820 loss_D1 = 1.288 loss_source = 0.64422 loss_target = 0.64425 exp = ./snapshots/NICE/
iter = 19/ 250000, loss_seg1 = 0.151 loss_adv1 = 0.881 loss_D1 = 1.287 loss_source = 0.68413 loss_target = 0.60291 exp = ./snapshots/NICE/
iter = 20/ 250000, loss_seg1 = 0.221 loss_adv1 = 0.738 loss_D1 = 1.277 loss_source = 0.55349 loss_target = 0.72365 exp = ./snapshots/NICE/
iter = 21/ 250000, loss_seg1 = 0.249 loss_adv1 = 0.769 loss_D1 = 1.266 loss_source = 0.57491 loss_target = 0.69090 exp = ./snapshots/NICE/
iter = 22/ 250000, loss_seg1 = 0.198 loss_adv1 = 0.879 loss_D1 = 1.258 loss_source = 0.66770 loss_target = 0.58993 exp = ./snapshots/NICE/
iter = 23/ 250000, loss_seg1 = 0.225 loss_adv1 = 0.874 loss_D1 = 1.248 loss_source = 0.66263 loss_target = 0.58552 exp = ./snapshots/NICE/
iter = 24/ 250000, loss_seg1 = 0.177 loss_adv1 = 0.777 loss_D1 = 1.238 loss_source = 0.57201 loss_target = 0.66571 exp = ./snapshots/NICE/
iter = 25/ 250000, loss_seg1 = 0.158 loss_adv1 = 0.770 loss_D1 = 1.229 loss_source = 0.55548 loss_target = 0.67382 exp = ./snapshots/NICE/
iter = 26/ 250000, loss_seg1 = 0.190 loss_adv1 = 0.936 loss_D1 = 1.223 loss_source = 0.68125 loss_target = 0.54149 exp = ./snapshots/NICE/
iter = 27/ 250000, loss_seg1 = 0.136 loss_adv1 = 0.825 loss_D1 = 1.208 loss_source = 0.57759 loss_target = 0.63062 exp = ./snapshots/NICE/
iter = 28/ 250000, loss_seg1 = 0.191 loss_adv1 = 0.791 loss_D1 = 1.207 loss_source = 0.54012 loss_target = 0.66722 exp = ./snapshots/NICE/
iter = 29/ 250000, loss_seg1 = 0.150 loss_adv1 = 0.970 loss_D1 = 1.205 loss_source = 0.67490 loss_target = 0.52976 exp = ./snapshots/NICE/
iter = 30/ 250000, loss_seg1 = 0.179 loss_adv1 = 0.927 loss_D1 = 1.194 loss_source = 0.62450 loss_target = 0.56983 exp = ./snapshots/NICE/
iter = 31/ 250000, loss_seg1 = 0.161 loss_adv1 = 0.808 loss_D1 = 1.197 loss_source = 0.50432 loss_target = 0.69273 exp = ./snapshots/NICE/
iter = 32/ 250000, loss_seg1 = 0.116 loss_adv1 = 1.030 loss_D1 = 1.185 loss_source = 0.65758 loss_target = 0.52791 exp = ./snapshots/NICE/
iter = 33/ 250000, loss_seg1 = 0.131 loss_adv1 = 0.963 loss_D1 = 1.172 loss_source = 0.57392 loss_target = 0.59828 exp = ./snapshots/NICE/
iter = 34/ 250000, loss_seg1 = 0.212 loss_adv1 = 0.920 loss_D1 = 1.172 loss_source = 0.51166 loss_target = 0.66044 exp = ./snapshots/NICE/
iter = 35/ 250000, loss_seg1 = 0.162 loss_adv1 = 1.181 loss_D1 = 1.178 loss_source = 0.70259 loss_target = 0.47562 exp = ./snapshots/NICE/
iter = 36/ 250000, loss_seg1 = 0.140 loss_adv1 = 0.905 loss_D1 = 1.158 loss_source = 0.45832 loss_target = 0.69990 exp = ./snapshots/NICE/
iter = 37/ 250000, loss_seg1 = 0.156 loss_adv1 = 0.970 loss_D1 = 1.148 loss_source = 0.49124 loss_target = 0.65669 exp = ./snapshots/NICE/
iter = 38/ 250000, loss_seg1 = 0.170 loss_adv1 = 1.251 loss_D1 = 1.160 loss_source = 0.68754 loss_target = 0.47273 exp = ./snapshots/NICE/
iter = 39/ 250000, loss_seg1 = 0.138 loss_adv1 = 1.082 loss_D1 = 1.151 loss_source = 0.53623 loss_target = 0.61496 exp = ./snapshots/NICE/
iter = 40/ 250000, loss_seg1 = 0.183 loss_adv1 = 1.065 loss_D1 = 1.162 loss_source = 0.51305 loss_target = 0.64927 exp = ./snapshots/NICE/
iter = 41/ 250000, loss_seg1 = 0.118 loss_adv1 = 1.417 loss_D1 = 1.181 loss_source = 0.76831 loss_target = 0.41260 exp = ./snapshots/NICE/
iter = 42/ 250000, loss_seg1 = 0.204 loss_adv1 = 1.044 loss_D1 = 1.166 loss_source = 0.49003 loss_target = 0.67573 exp = ./snapshots/NICE/
iter = 43/ 250000, loss_seg1 = 0.124 loss_adv1 = 1.147 loss_D1 = 1.117 loss_source = 0.54561 loss_target = 0.57114 exp = ./snapshots/NICE/
iter = 44/ 250000, loss_seg1 = 0.154 loss_adv1 = 1.283 loss_D1 = 1.112 loss_source = 0.64131 loss_target = 0.47031 exp = ./snapshots/NICE/
iter = 45/ 250000, loss_seg1 = 0.115 loss_adv1 = 1.100 loss_D1 = 1.076 loss_source = 0.48435 loss_target = 0.59213 exp = ./snapshots/NICE/
iter = 46/ 250000, loss_seg1 = 0.127 loss_adv1 = 1.050 loss_D1 = 1.073 loss_source = 0.44074 loss_target = 0.63240
```
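For longer runs it can help to check such a trend mechanically rather than by eye. A rough sketch, assuming log lines in the format shown above (a few real lines from the log inlined for illustration):

```python
import re

# Sample lines copied from the log above (iters 0, 23, and 46).
LOG = """\
iter = 0/ 250000, loss_seg1 = 1.057 loss_adv1 = 0.693 loss_D1 = 1.386
iter = 23/ 250000, loss_seg1 = 0.225 loss_adv1 = 0.874 loss_D1 = 1.248
iter = 46/ 250000, loss_seg1 = 0.127 loss_adv1 = 1.050 loss_D1 = 1.073
"""

pattern = re.compile(
    r"loss_seg1 = ([\d.]+) loss_adv1 = ([\d.]+) loss_D1 = ([\d.]+)")

seg, adv, d = [], [], []
for line in LOG.splitlines():
    m = pattern.search(line)
    if m:
        seg.append(float(m.group(1)))
        adv.append(float(m.group(2)))
        d.append(float(m.group(3)))

# Expected trend: seg and D decrease, adv increases (first vs. last value).
seg_down = seg[0] > seg[-1]
adv_up = adv[0] < adv[-1]
d_down = d[0] > d[-1]
```

Comparing the first and last values is crude; over a full run you would compare smoothed averages of early vs. late windows instead.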

Sunting78 commented 6 years ago

Thanks for your reply. The above is my training log. Could you tell me whether the loss trend is right? loss_seg1 and loss_D1 are decreasing, and loss_adv1 is going up.

wasidennis commented 5 years ago

@Sunting78 sorry for the late reply and hope you have figured it out! It would be better to train the model for a few thousand more iterations and then check the loss trend.
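When judging the trend over thousands of iterations, the raw per-iteration losses are quite noisy, so smoothing them first makes the trend easier to read. A minimal sketch (plain Python, not part of the repo):

```python
def moving_average(values, window=100):
    """Trailing moving average; the window shrinks at the start."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        out.append(sum(values[lo:i + 1]) / (i + 1 - lo))
    return out

# Compare smoothed endpoints instead of single noisy iterations.
smoothed = moving_average([1.0, 0.9, 1.1, 0.8, 0.7, 0.9, 0.6], window=3)
trend_is_down = smoothed[0] > smoothed[-1]
```

With a window of around 100 iterations, the seesawing of loss_source and loss_target in the log above averages out, leaving the slower drift that actually matters.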