Closed: galeone closed this issue 6 years ago.
Hi,
Thanks for your interest in our research.
Training GANs is highly challenging, and I would say this is even more the case for BiGANs/AliGANs. Among the BiGAN/AliGAN implementations you can find online there are always small differences, and those variations are often fine. One of them is whether G and E are trained at the same time or separately. As I explained in an issue I closed earlier this week, one can easily find an interpretation of a loss function for the encoder alone and a loss function for the generator (or decoder) alone. We have used those separate loss functions from the very start, both as one of the possible variations and to keep the construction of the encoder architecture separate from that of the generator architecture.
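Concretely, the two separate losses can look like this (a minimal sketch using the usual non-saturating cross-entropy formulation; illustrative, not necessarily the exact code in this repository):

```python
import tensorflow as tf

def generator_loss(d_fake_logits):
    # G wants D to label generated pairs (G(z), z) as real (target 1).
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            labels=tf.ones_like(d_fake_logits), logits=d_fake_logits))

def encoder_loss(d_real_logits):
    # E wants D to label encoded pairs (x, E(x)) as fake (target 0),
    # i.e. E plays the adversarial role symmetric to G's.
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            labels=tf.zeros_like(d_real_logits), logits=d_real_logits))
```

Each loss depends only on one of the two networks, which is what lets you build and update E and G independently.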
I would also recommend writing to the authors of Ali/BiGAN to see whether they have a better answer. My response here is only intuitive, not really quantitative.
Thanks again, Houssam
Thank you very much!
Hello, have you run the code successfully?
Hi, I'm reading your code, looking at the BiGAN implementation, and I was wondering: why is the loss function for E separate from the loss function for G?
In standard adversarial training, we train D in a min-max game: the discriminator learns to distinguish real samples from fake ones, and G uses D's judgment of the generated samples to update its parameters.
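In symbols, that is the usual GAN objective:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$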
Hence, can't the min-max game also be played by G and E together? Both use the signal produced by D, and E must learn the task opposite to G's. So why can't the loss be something like the following?
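E.g., the standard BiGAN value function, with G and E jointly minimizing what D maximizes:

$$\min_{G,E} \max_D \; \mathbb{E}_{x \sim p_X}\big[\log D(x, E(x))\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z), z)\big)\big]$$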
I'm trying to train with this loss function and training is unstable: sometimes the generated images look fine, but after some epochs the model collapses and generates black images. A few steps later, the output returns to something realistic...
Probably there's something obvious that I'm missing. Thank you for your help!