YuanXue1993 / SegAN

SegAN: Semantic Segmentation with Adversarial Learning
MIT License

The loss of S and C optimization is different #11

Open DrewdropLife opened 5 years ago

DrewdropLife commented 5 years ago

The paper says that S and C are optimized with the same loss, but in the code a dice loss is added when optimizing S. When I try to remove it, the results become very poor. Why is that?

YuanXue1993 commented 5 years ago

The dice loss is used to help stabilize the adversarial training. You can try to "warm up" the network with regular training (dice only) for several epochs and then remove the dice loss to use pure adversarial training; you should see that the adversarial training then works properly. Alternatively, you can try to use the adversarial loss only from scratch, but in that case you may have to experiment with different learning rates or even a different architecture, as the training can be unstable.
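A minimal sketch of the warm-up schedule described above, in plain Python. The `warmup_epochs` threshold and the hard switch from dice-only to adversarial-only training are illustrative assumptions, not the repo's actual code; the soft dice loss here operates on flat lists of probabilities for simplicity:

```python
def soft_dice_loss(pred, target, eps=1e-6):
    """Soft dice loss between predicted probabilities and binary targets.

    Both inputs are flat lists of the same length. Returns a value in
    [0, 1], where 0 means a perfect overlap.
    """
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)


def segmenter_loss(epoch, dice_loss, adv_loss, warmup_epochs=10):
    """Loss schedule for the segmenter S, following the warm-up idea.

    During the first `warmup_epochs` epochs, train with dice loss only
    to stabilize S; afterwards, switch to pure adversarial training.
    `warmup_epochs=10` is an arbitrary choice for illustration.
    """
    if epoch < warmup_epochs:
        return dice_loss   # warm-up phase: regular (dice-only) training
    return adv_loss        # afterwards: pure adversarial training
```

For example, `segmenter_loss(3, 0.4, 0.9)` returns the dice value `0.4` during warm-up, while `segmenter_loss(12, 0.4, 0.9)` returns the adversarial value `0.9`.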