simontomaskarlsson / GAN-MRI

Code repository for Frontiers article 'Generative Adversarial Networks for Image-to-Image Translation on Multi-Contrast MR Images - A Comparison of CycleGAN and UNIT'
GNU General Public License v3.0

Loss D goes to zero #11

Closed: John1231983 closed this issue 5 years ago

John1231983 commented 5 years ago

During training of the CycleGAN, I found that the D loss goes to zero while the G loss goes up. Did you have the same issue when training on your dataset? What could be the problem? Thanks

simontomaskarlsson commented 5 years ago

Hi @John1231983, when implementing the CycleGAN in Keras we tried a bunch of different training settings, i.e. the ones you see in the init function of the CycleGAN class. In the end, the settings that are now the defaults worked best for us.

In your case the discriminator "beats" the generator and correctly classifies both the real and synthetic versions of the training images, which eventually prevents the discriminators and generators from improving any further.

Try modifying the model settings, especially the learning rates and perhaps the generator_iterations and discriminator_iterations.
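For reference, here is a minimal sketch of the kind of settings meant here. The class and attribute names (TrainingSettings, learning_rate_D, generator_iterations, and so on) are illustrative assumptions following the pattern of the repo's CycleGAN class, not an exact copy of it, and the snippet assumes the standalone Keras API:

```python
from keras.optimizers import Adam

class TrainingSettings:
    """Hypothetical container mirroring the kind of settings in the CycleGAN init."""
    def __init__(self):
        # Lowering D's learning rate relative to G's is one way to keep the
        # discriminator from "beating" the generator early in training.
        self.learning_rate_D = 1e-4
        self.learning_rate_G = 2e-4

        # Number of consecutive updates each network gets per training batch.
        # Raising generator_iterations (or lowering discriminator_iterations)
        # gives the generator more chances to keep up with the discriminator.
        self.generator_iterations = 2
        self.discriminator_iterations = 1

        self.beta_1 = 0.5
        self.optimizer_D = Adam(lr=self.learning_rate_D, beta_1=self.beta_1)
        self.optimizer_G = Adam(lr=self.learning_rate_G, beta_1=self.beta_1)
```

Which combination works is dataset-dependent, so treat these values as starting points to experiment from rather than a fix.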

Uparrow0 commented 5 years ago

Dear Professor, I would like to reproduce your results. Is the dataset you are using an open dataset?

simontomaskarlsson commented 5 years ago

Hi @Uparrow0, See https://github.com/simontomaskarlsson/GAN-MRI/issues/5 for my response.

John1231983 commented 5 years ago

@simontomaskarlsson do you know the technical term describing my issue? I tried increasing the discriminator iterations but it still does not solve my issue.

simontomaskarlsson commented 5 years ago

@John1231983 I don't think there is a specific term describing your issue. In my experience these issues are common, and things rarely work right away when using a new dataset or model. So first of all, make sure you have the settings as you want them and that the code does what you intend. Then try different alterations backed up with scientific motivation. Good luck!
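To make such alterations measurable, one option is to log the D and G losses every few batches so the failure mode in this issue (D loss dropping toward zero while G loss climbs) shows up early. Below is a minimal sketch, not the repo's training code: it assumes compiled Keras models D (the discriminator), G (a generator, used here only for prediction) and G_combined (a generator trained through a frozen discriminator), plus a hypothetical next_real_batch helper; scalar labels are a simplification, since a PatchGAN discriminator would need patch-shaped label arrays.

```python
import numpy as np

def train_step(D, G, G_combined, next_real_batch, d_iters=1, g_iters=1):
    """One alternating update; returns (d_loss, g_loss) so both can be logged."""
    real = next_real_batch()
    fake = G.predict(real)                  # synthetic images from the generator
    real_labels = np.ones((len(real), 1))   # scalar labels; a PatchGAN D would
    fake_labels = np.zeros((len(fake), 1))  # need patch-shaped label arrays

    for _ in range(d_iters):
        d_loss_real = D.train_on_batch(real, real_labels)
        d_loss_fake = D.train_on_batch(fake, fake_labels)
    d_loss = 0.5 * (np.asarray(d_loss_real) + np.asarray(d_loss_fake))

    for _ in range(g_iters):
        # The combined model takes real images, generates fakes internally, and
        # is trained to make the (frozen) discriminator label them as real.
        g_loss = G_combined.train_on_batch(real, real_labels)

    return d_loss, g_loss
```

If d_loss heads toward zero within the first epochs while g_loss keeps rising, that is the point to change the learning rates or the d_iters/g_iters ratio rather than continuing the run.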