gaoming96 closed this issue 5 years ago
Hi @gaoming96,
I remember that at the time I was training, I could not train the network (probably due to the loss aux_errD_fake,
or maybe another bug, I don't remember), so I removed it from the loss.
The reason I didn't put aux_errD_fake
in the loss is simply that I didn't want D to learn from fake outputs for the labelling task.
For example, during training, the output image from G is still not fully formed, e.g. it does not yet look like a table, or a room, or a bridge (it does not look like a real image), and yet we tell D this is a table, room, or bridge. So this is wrong information for D to learn from.
You can try to put it in the loss and let us know the result :) Hung.
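To make the two variants concrete, here is a minimal numeric sketch of the discriminator loss with and without the aux_errD_fake term. The network outputs and the three-class label distribution below are made-up toy values, not taken from congan_train.py; only the structure of the loss terms follows the discussion above.

```python
import math

def bce(p, y):
    # Binary cross-entropy for the real/fake head of D.
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def nll(class_probs, label):
    # Negative log-likelihood for the auxiliary label head: -ln P(label | x).
    return -math.log(class_probs[label])

# Hypothetical outputs of D for one real and one fake sample.
real_adv, fake_adv = 0.9, 0.2          # D's real/fake scores
real_cls = [0.7, 0.2, 0.1]             # P(label | X_real)
fake_cls = [0.3, 0.4, 0.3]             # P(label | X_fake)
label = 0                              # conditioning class

errD_real = bce(real_adv, 1.0)         # real image scored as real
errD_fake = bce(fake_adv, 0.0)         # fake image scored as fake
aux_errD_real = nll(real_cls, label)   # label loss on the real image
aux_errD_fake = nll(fake_cls, label)   # the term under discussion

# Variant used here: no label loss on fake images.
d_loss_without = errD_real + errD_fake + aux_errD_real
# Variant with the fake-label term included, as in the AC-GAN formulation.
d_loss_with = d_loss_without + aux_errD_fake
print(d_loss_without, d_loss_with)
```

When G's samples are still unrecognizable, fake_cls is close to uniform, so aux_errD_fake is a large, essentially noisy penalty pushed onto D's label head.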
Hi Hung,
Thank you for your reply. I have tried both variants on my own dataset, and the loss without aux_errD_fake
when training D gives better visualizations. I think your answer makes sense: adding this E[ln P(label | X_fake)] term may harm D if G is not trained well enough.
I agree: if G is not trained well, it may harm D a lot. If it is, it will probably give a better result.
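One way to act on this observation ("harmful while G is weak, probably helpful once G is good") is to weight the fake-label term by a schedule instead of dropping it entirely. This is a hypothetical sketch, not something from congan_train.py; the function name and warmup_steps parameter are made up for illustration.

```python
def aux_fake_weight(step, warmup_steps=10000):
    """Hypothetical schedule for the aux_errD_fake coefficient.

    Keeps the fake-label term off while G is still producing
    unrecognizable images, then ramps it in linearly to 1.0.
    """
    if step < warmup_steps:
        return 0.0
    return min(1.0, (step - warmup_steps) / warmup_steps)

# In the D update, the total loss would then look like:
#   d_loss = errD_real + errD_fake + aux_errD_real \
#            + aux_fake_weight(step) * aux_errD_fake
```

With this schedule, early training matches the no-aux_errD_fake variant exactly, and the full AC-GAN-style loss is recovered only after the warmup.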
Hi Hung, I wonder why you commented out lines 247-248 in congan_train.py? I suppose that in the paper, both of the two losses
aux_errD_real
and aux_errD_fake
are needed. Is there any reason to comment them out? Thanks in advance.