Closed jvwilliams23 closed 1 month ago
@Zhendong-Wang
Sorry for the delayed reply. I am not sure what kind of data you are using, or what code and noise settings you are currently running.
From the observation, one reason could be that your discriminator input is injected with too much noise and the timestep condition is not working well. This makes the discriminator regard noisy images as good samples, which leads the generator to generate noisy images. This depends on the noise schedule you are using.
For Diffusion-GAN, one motivation is to mitigate the discriminator overfitting problem in GANs. We gradually increase the noise level, so the discriminator's task becomes harder and harder, which forms a curriculum learning pipeline. You could also try a uniform noise schedule; that also worked well in our experiments.
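To make the scheduling idea concrete, here is a minimal NumPy sketch of the two options described above: forward-diffusing the discriminator's input to a sampled timestep, where the maximum timestep `T` either grows over training (curriculum) or timesteps are drawn uniformly. The beta schedule and the priority weighting are illustrative assumptions, not the exact settings from the Diffusion-GAN code.

```python
import numpy as np

def diffuse(x0, t, betas, rng):
    """Forward-diffuse x0 to timestep t, i.e. sample from q(x_t | x_0)."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

def sample_timestep(T, rng, mode="priority"):
    """Pick a timestep in [0, T].

    'uniform'  : every noise level equally likely.
    'priority' : later (noisier) timesteps weighted more heavily
                 (an illustrative curriculum-style weighting).
    """
    if mode == "uniform":
        return rng.integers(0, T + 1)
    w = np.arange(1, T + 2, dtype=float)
    return rng.choice(T + 1, p=w / w.sum())

# During training, T would be increased gradually (e.g. based on a
# discriminator-overfitting signal), so early on the discriminator
# sees nearly clean images and only later sees heavily noised ones.
```

If the generator collapses to noise, a first check is whether `T` (and hence `alpha_bar`) has drifted so far that even real images are indistinguishable from noise at the sampled timesteps.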
Thanks for your response!
Hi,
I have tested this on my own dataset. After around 1M iterations, the generator output tends towards pure noise (you can faintly see the pattern of the generated images, but it is mainly noise). I am wondering why this happens?
As I understood from the Diffusion-GAN paper, training begins with unmodified images, and the noise is increased as training progresses. This is the opposite of instance noise, where the noise level is highest at initialisation and the noise standard deviation is then annealed over training. Could you comment on this?
Many thanks, Josh