tearscoco opened 3 years ago
Would you be willing to share your setup, for training this on a custom dataset? I'm trying to go through this myself, only started, and would welcome some advice.
Nevermind, it was actually quite easy to do!
@tearscoco Did you somehow solve the issue? I've encountered the same problem (D loss during training remains 1)
any updates on this?
Same behavior here. I'm not sure whether this is wrong behavior or simply how it's supposed to work. When I let the discriminator train on the real/fake images but don't feed its decision back into the generator's loss, it ends up learning to classify them very well. So the fact that the discriminator loss stays at 1 means the generative model is actually doing a good job of fooling it.
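This matches how a hinge-style discriminator loss behaves numerically. A minimal sketch, assuming the common hinge GAN formulation (the function name and the example logits below are illustrative, not from this repo):

```python
def d_hinge_loss(logits_real, logits_fake):
    # Hinge loss for the discriminator: it is only rewarded for pushing
    # real logits above +1 and fake logits below -1.
    loss_real = sum(max(0.0, 1.0 - x) for x in logits_real) / len(logits_real)
    loss_fake = sum(max(0.0, 1.0 + x) for x in logits_fake) / len(logits_fake)
    return 0.5 * (loss_real + loss_fake)

# If the generator fools D so well that D's logits hover around 0 for
# both real and fake inputs, each term is ~1, so the total is ~1:
print(d_hinge_loss([0.0] * 8, [0.0] * 8))  # -> 1.0
```

So a D loss pinned at 1 can simply mean the discriminator's outputs are stuck near zero for everything, i.e. it cannot tell real from fake, rather than that training is broken.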
Great work!
I have been working on my own dataset recently. During training, I noticed two odd things about the loss. I'd really appreciate your guidance if you've run into the same problems before.
a. When I train on my own dataset, the whole process runs well except that the D loss stays at 1 throughout training! I followed the same procedure, where the discriminator starts after several epochs. It seems that D loses its ability to distinguish real from fake. I decreased the number of warm-up epochs but ended up with the same result.
b. I tried excluding the D loss and keeping only the perceptual loss. The reconstructed results look fine, except that some blocking-artifact noise appears in areas with complex patterns. I wonder whether you ran into the same oddity.
All in all, I really think this work is a big step toward better text-to-image generation.