OaDsis / DerainCycleGAN

MIT License
46 stars · 4 forks

Question about loss function #15

Closed · Lecxxx closed this 1 year ago

Lecxxx commented 1 year ago

Thank you for your outstanding work, but I have a few questions about the loss function in the code. At line 180 of model.py, you commented out the following code:

```python
cha1, row1, col1 = self.mask_a[3].shape[1], self.mask_a[3].shape[2], self.mask_a[3].shape[3]
mask_a_gt = torch.rand(opts.batch_size, cha1, row1, col1).cuda()
loss_att_a = self.criterionL2(self.mask_a[5], mask_a_gt) * 10
```

With this code commented out, the attention loss contains only half of the terms described in your paper, which conflicts with the paper's formulation. Is this a problem?

OaDsis commented 1 year ago

Yes, that is a mistake; you can uncomment that code.
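For reference, a minimal numpy sketch of what the restored symmetric attention loss computes: an L2 (mean-squared-error) term for each attention map, each scaled by 10, matching the `criterionL2(...) * 10` pattern in the issue. The shapes, the fixed values, and the all-ones/all-zeros targets here are illustrative assumptions only; the repository itself draws the `mask_a` target with `torch.rand(...)`.

```python
import numpy as np

def l2_attention_loss(mask, mask_gt, weight=10.0):
    # Mean-squared error between an attention map and its target,
    # scaled by a weight, mirroring criterionL2(mask, mask_gt) * 10.
    return weight * np.mean((mask - mask_gt) ** 2)

# Illustrative (batch, channels, height, width) shapes and values.
mask_b = np.full((1, 1, 4, 4), 0.9)  # attention map, rainy direction
mask_a = np.full((1, 1, 4, 4), 0.1)  # attention map, clean direction
gt_b = np.ones_like(mask_b)          # stand-in target for mask_b
gt_a = np.zeros_like(mask_a)         # stand-in target for mask_a

# With the commented-out code restored, BOTH terms enter the total loss,
# giving the symmetric formulation the paper describes.
loss_att = l2_attention_loss(mask_b, gt_b) + l2_attention_loss(mask_a, gt_a)
```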

Lecxxx commented 1 year ago

@OaDsis Thanks for your reply. I uncommented the code and modified the loss function to the following form:

```python
loss_G = loss_G_GAN_A + loss_G_GAN_B + \
         loss_G_L1_A + loss_G_L1_B + \
         loss_perceptual + \
         loss_cons + \
         loss_att_b + loss_att_a
```

The results during training look like this:

```
total_it: 1 (ep 0, it 1), lr 0.000100, disA 0.454318, disB 0.500568, ganA 0.973853, ganB 0.973228, recA 4.425363, recB 4.133862, percp 0.880033, cons_loss 2.424589, attB 0.001361, attA 3.451179, total 17.263468
total_it: 2 (ep 0, it 2), lr 0.000100, disA 0.416719, disB 0.463749, ganA 0.912895, ganB 0.900647, recA 4.648975, recB 3.706006, percp 0.746614, cons_loss 2.885847, attB 0.000231, attA 3.364104, total 17.165318
total_it: 3 (ep 0, it 3), lr 0.000100, disA 0.399364, disB 0.425298, ganA 0.838068, ganB 0.805129, recA 3.031458, recB 3.395942, percp 0.725828, cons_loss 1.318987, attB 0.000112, attA 3.283001, total 13.398525
total_it: 4 (ep 0, it 4), lr 0.000100, disA 0.349654, disB 0.377379, ganA 0.745493, ganB 0.688415, recA 4.654613, recB 2.549643, percp 0.787967, cons_loss 3.059796, attB 0.001204, attA 3.235075, total 15.722206
total_it: 5 (ep 0, it 5), lr 0.000100, disA 0.285982, disB 0.324985, ganA 0.632568, ganB 0.546403, recA 4.403074, recB 2.788066, percp 0.822683, cons_loss 2.784659, attB 0.004070, attA 3.141724, total 15.123247
total_it: 6 (ep 0, it 6), lr 0.000100, disA 0.256373, disB 0.261119, ganA 0.482774, ganB 0.388865, recA 3.252872, recB 3.830938, percp 1.007862, cons_loss 1.571942, attB 0.007443, attA 3.076305, total 13.619002
...
total_it: 393 (ep 1, it 193), lr 0.000100, disA 0.230090, disB 0.243723, ganA 0.313256, ganB 0.452222, recA 1.619573, recB 0.958254, percp 0.542632, cons_loss 0.319973, attB 0.498841, attA 1.572791, total 6.277541
total_it: 394 (ep 1, it 194), lr 0.000100, disA 0.183904, disB 0.337638, ganA 0.351476, ganB 0.603120, recA 1.062369, recB 0.992846, percp 0.322584, cons_loss 0.190934, attB 0.510730, attA 1.636341, total 5.670399
total_it: 395 (ep 1, it 195), lr 0.000100, disA 0.208328, disB 0.235580, ganA 0.536630, ganB 0.382105, recA 1.527501, recB 1.149172, percp 0.353436, cons_loss 0.281888, attB 0.513301, attA 1.520630, total 6.264664
total_it: 396 (ep 1, it 196), lr 0.000100, disA 0.241589, disB 0.231161, ganA 0.113991, ganB 0.529447, recA 1.181571, recB 1.609867, percp 0.367512, cons_loss 0.177205, attB 0.500499, attA 1.643606, total 6.123698
total_it: 397 (ep 1, it 197), lr 0.000100, disA 0.168638, disB 0.325995, ganA 0.252802, ganB 0.252872, recA 2.025960, recB 1.257719, percp 0.423531, cons_loss 0.238254, attB 0.529761, attA 1.571633, total 6.552532
total_it: 398 (ep 1, it 198), lr 0.000100, disA 0.184195, disB 0.348183, ganA 0.307939, ganB 0.246130, recA 1.996846, recB 1.250796, percp 0.378959, cons_loss 0.439748, attB 0.550344, attA 1.528978, total 6.699740
total_it: 399 (ep 1, it 199), lr 0.000100, disA 0.291252, disB 0.241256, ganA 0.271846, ganB 0.480793, recA 1.353552, recB 1.062438, percp 0.246189, cons_loss 0.137802, attB 0.517819, attA 1.617824, total 5.688264
```

Is the value of the attA loss normal? How does it compare with your values during training?
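One quick way to answer this kind of question is to pull a single loss term out of the printed log and watch its trend over iterations. The helper below is my own, not part of the repository; it just keys off the `name value` pairs in the log format shown above.

```python
import re

def extract_loss(line: str, name: str) -> float:
    """Extract one named loss value (e.g. 'attA') from a training-log line."""
    m = re.search(rf"{name} ([0-9.]+)", line)
    if m is None:
        raise ValueError(f"{name} not found in line")
    return float(m.group(1))

line = ("total_it: 393 (ep 1, it 193), lr 0.000100, disA 0.230090, "
        "attB 0.498841, attA 1.572791, total 6.277541")
print(extract_loss(line, "attA"))  # 1.572791
```

Running this over the whole log and plotting `attA` per iteration makes it easy to see whether the term is still decreasing or has plateaued.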

Lecxxx commented 1 year ago

@OaDsis Hello! The design of the loss function in the code comments does not correspond to the paper, as shown below:

(attached image: loss formulation from the paper)

skyknights commented 1 year ago

> @OaDsis Hello! The design of the loss function in the code comments does not correspond to the paper, as shown below:
>
> (attached image: loss formulation from the paper)

Hello, is the current project's loss function correct? Can it be made consistent with the one in the paper?

OaDsis commented 1 year ago

Yes, you can!