YuanXue1993 / SegAN

SegAN: Semantic Segmentation with Adversarial Learning
MIT License

loss #6

Open hydxqing opened 5 years ago

hydxqing commented 5 years ago

Why are my output loss_G and loss_D opposite to each other? In your code, loss_G and loss_D differ only in sign. Also, after training completes, the predictions are all NaN. Why is this so? I really hope to hear from you.

douhe66 commented 5 years ago

Because the denominator may be 0 when calculating IoU and Dice, you can replace np.mean() with np.nanmean().
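To illustrate the point above: when both the prediction and the ground-truth mask are empty, the Dice denominator is 0 and the score becomes NaN, which then propagates through `np.mean()`. A minimal sketch (the function name and shapes are hypothetical, not from the SegAN repo) showing the two common fixes, a smoothing epsilon in the ratio or `np.nanmean()` over the per-sample scores:

```python
import numpy as np

def dice_scores(preds, targets, eps=0.0):
    """Per-sample Dice coefficient for binary masks.

    With eps=0, an empty prediction AND empty target give 0/0 -> NaN.
    A small eps in numerator and denominator makes that case score 1.0.
    """
    scores = []
    for p, t in zip(preds, targets):
        inter = np.logical_and(p, t).sum()
        denom = p.sum() + t.sum()
        scores.append((2.0 * inter + eps) / (denom + eps))
    return np.array(scores)

# two samples: a perfect match and an all-empty pair
preds = np.array([[[1, 1], [0, 0]], [[0, 0], [0, 0]]])
targets = np.array([[[1, 1], [0, 0]], [[0, 0], [0, 0]]])

raw = dice_scores(preds, targets)            # second sample is NaN
print(np.mean(raw))                          # NaN poisons the mean
print(np.nanmean(raw))                       # fix 1: ignore NaN samples
print(np.mean(dice_scores(preds, targets, eps=1e-7)))  # fix 2: smoothing
```

Either fix prevents the NaNs from leaking into the training log; the epsilon variant is the more common choice when Dice is also used as a loss, since it stays differentiable.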

douhe66 commented 5 years ago

I also have a problem with loss_G and loss_D. Actually, I changed the sign of loss_D, but the results do not seem to differ. Can anyone explain this?

YuanXue1993 commented 5 years ago

This shouldn't be happening. Maybe you can try training with the adversarial loss alone (i.e., without the dice loss, which was added to help stabilize the adversarial training). In that case, flipping the sign should make the whole training fail.
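The opposite signs are expected in this kind of min-max setup: the segmentor minimizes the multi-scale L1 distance between critic features of the predicted and ground-truth masked images, while the critic maximizes that same distance, so loss_D is literally the negation of loss_G. A hedged NumPy sketch of that relationship (the feature arrays and function name here are illustrative, not taken from the SegAN code):

```python
import numpy as np

def multiscale_l1(feats_pred, feats_gt):
    """Mean absolute difference, averaged over critic feature maps
    extracted at several scales (one array per scale)."""
    return float(np.mean([np.mean(np.abs(p - g))
                          for p, g in zip(feats_pred, feats_gt)]))

# hypothetical critic features at two scales for one image
rng = np.random.default_rng(0)
feats_pred = [rng.normal(size=(8, 8)), rng.normal(size=(4, 4))]
feats_gt = [rng.normal(size=(8, 8)), rng.normal(size=(4, 4))]

l1 = multiscale_l1(feats_pred, feats_gt)
loss_G = l1    # segmentor: minimize the feature distance
loss_D = -l1   # critic: maximize it, hence the opposite sign
```

Flipping the sign of loss_D would make both players minimize the same quantity, removing the adversarial pressure; if the results barely change, it suggests the dice term (rather than the adversarial term) is dominating the training signal.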