MinfengZhu / DM-GAN


ACM module #3

Closed Blue-Clean closed 4 years ago

Blue-Clean commented 4 years ago

Thanks for your great work on T2I. After reading your paper and reproducing your code, I found something strange. I plotted g_loss, d_loss, and the discriminator accuracy with visdom, and the loss of D does not decrease continuously. Do I have to train for the full 800 epochs, or can I stop early by observing the loss and accuracy curves?
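(For readers reproducing this setup: a minimal visdom logging sketch, assuming a visdom server is already running via `python -m visdom.server`. The window names and the `g_loss`/`d_loss`/`d_acc` scalars are placeholders for whatever your training loop produces, not the repo's actual logging code.)

```python
import numpy as np
import visdom

vis = visdom.Visdom()  # connects to the default server at http://localhost:8097

def log_scalar(win, step, value, title):
    """Append one (step, value) point to a visdom line plot, creating the window if needed."""
    vis.line(
        X=np.array([step]),
        Y=np.array([value]),
        win=win,
        update="append" if vis.win_exists(win) else None,
        opts=dict(title=title, xlabel="iteration", ylabel=title),
    )

# inside the training loop (g_loss, d_loss, d_acc are hypothetical scalars):
# log_scalar("g_loss", step, g_loss, "generator loss")
# log_scalar("d_loss", step, d_loss, "discriminator loss")
# log_scalar("d_acc", step, d_acc, "discriminator accuracy")
```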

MinfengZhu commented 4 years ago

The GAN loss measures the balance between the discriminator and the generator. A stable loss only suggests that your GAN has converged to the game's Nash equilibrium; you can find an example here. If you want to stop training early, you should measure image quality with IS and FID after each epoch. In practice, we evaluate all saved checkpoints after training and select the best one.
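(Editorial sketch of the "evaluate all saved checkpoints and select the best one" workflow described above. The checkpoint naming pattern and the `compute_fid` helper are assumptions for illustration, not this repo's actual evaluation code.)

```python
import glob
import os

def compute_fid(checkpoint_path):
    """Hypothetical helper: load the generator from `checkpoint_path`, sample images,
    and return the FID against the validation set. Plug in your own FID evaluation."""
    raise NotImplementedError

def select_best_checkpoint(ckpt_dir, pattern="netG_epoch_*.pth"):
    """Score every saved generator checkpoint and return the one with the lowest FID."""
    scores = {path: compute_fid(path)
              for path in sorted(glob.glob(os.path.join(ckpt_dir, pattern)))}
    best = min(scores, key=scores.get)
    return best, scores[best]

# usage (hypothetical directory):
# best_ckpt, best_fid = select_best_checkpoint("models/bird_DMGAN")
```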

Blue-Clean commented 4 years ago

Thank you very much. I would like to know which epoch (generally speaking) your released best model comes from. Also, could you release your code for R_precision?

MinfengZhu commented 4 years ago

In our experiments, the last 100 epochs may achieve the best performance. I will update the code for R_precision as soon as possible.

Blue-Clean commented 4 years ago

Hi! It's me again! I want to know how to evaluate the model with R-precision, but I didn't find any code for it on GitHub. You said "as soon as possible"; how soon is soon? :) Sorry to bother you!

MinfengZhu commented 4 years ago

I have updated the code for R-precision.
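(For readers who want the gist before checking the updated code: R-precision is typically computed as in AttnGAN, by ranking the ground-truth caption of each generated image against 99 randomly sampled mismatched captions using cosine similarity between the image feature and the sentence features from the pretrained DAMSM encoders, and reporting the top-1 hit rate over all generated images. The sketch below is my own illustration of that per-image check, not necessarily this repo's exact implementation.)

```python
import torch
import torch.nn.functional as F

def r_precision_hit(image_feat, true_text_feat, mismatched_text_feats):
    """
    image_feat:            (D,)     global feature of one generated image (image encoder)
    true_text_feat:        (D,)     sentence feature of its ground-truth caption (text encoder)
    mismatched_text_feats: (99, D)  sentence features of 99 randomly sampled mismatched captions
    Returns 1.0 if the ground-truth caption ranks first by cosine similarity, else 0.0.
    """
    candidates = torch.cat([true_text_feat.unsqueeze(0), mismatched_text_feats], dim=0)  # (100, D)
    sims = F.cosine_similarity(image_feat.unsqueeze(0), candidates, dim=1)               # (100,)
    return float(torch.argmax(sims).item() == 0)

# R-precision = mean of r_precision_hit over all generated test images
```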