explainingai-code / DDPM-Pytorch

This repo implements Denoising Diffusion Probabilistic Models (DDPM) in Pytorch

Noise on all generated images #3

Closed · exponentialXP closed this issue 5 months ago

exponentialXP commented 5 months ago

[Generated samples at epoch 5]

No matter how many epochs I train the model for, I keep getting noise on my images. I think this is because there is still noise at timestep 0. Note: I still get noise in my images even with 1000 timesteps instead of 300.

explainingai-code commented 5 months ago

Hello @exponentialXP,

Can you confirm a few things:

- Is this the celeb dataset that you are training on, and for how many epochs have you trained?
- Is your celeb dataset class scaling the images from -1 to 1 before adding noise?
- Does the inference code (in case you have modified the existing repo code) always clamp the output to -1 to 1 before saving?
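
For reference, a minimal sketch of what that scaling typically looks like in a torchvision-based dataset class; the exact transform in the repo or your fork may differ:

```python
import torchvision.transforms as T

# Illustrative transform pipeline (assumed, not the repo's exact code):
# images should reach the diffusion forward process in the [-1, 1] range.
transform = T.Compose([
    T.Resize((32, 32)),
    T.ToTensor(),                    # PIL image -> float tensor in [0, 1]
    T.Lambda(lambda x: 2 * x - 1),   # rescale [0, 1] -> [-1, 1] before noise is added
])
```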

exponentialXP commented 5 months ago

> Hello @exponentialXP,
>
> Can you confirm a few things: Is this the celeb dataset that you are training on, and for how many epochs have you trained? Is your celeb dataset class scaling the images from -1 to 1 before adding noise? Does the inference code (in case you have modified the existing repo code) always clamp the output to -1 to 1 before saving?

All the hyperparameters except batch size are the same as yours, and the code is mostly the same as well. I trained the above for 5 epochs, but even if I train for 20 the noise is still there. I have done all of the above, and I am training on 30,000 32x32 CelebHQ images.

All my code is here; scheduler.py, which is exactly the same as the one in your repository, is where I think the issue is coming from: https://github.com/exponentialXP/diffusion

Thank you so much for your help :)

explainingai-code commented 5 months ago

In line https://github.com/exponentialXP/diffusion/blob/main/train.py#L92, before applying the reverse transform you need to clamp the output, as is done in the sample_ddpm.py code here: https://github.com/explainingai-code/DDPM-Pytorch/blob/main/tools/sample_ddpm.py#L32

That should ensure the artifacts you are seeing are no longer there; they are probably caused by some x0 output pixels overshooting the -1 to 1 range. Can you add that clamping and confirm whether it improves the final output?
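
For illustration, a minimal sketch of that clamping step before converting back to an image; the variable names are placeholders, not the exact ones in train.py:

```python
import torch
from torchvision.utils import make_grid, save_image

# Placeholder: in practice xt is the sampled batch at the end of the reverse process.
xt = torch.randn(16, 3, 32, 32)

xt = torch.clamp(xt, -1., 1.)   # clip any pixels that overshoot the trained [-1, 1] range
imgs = (xt + 1) / 2             # map [-1, 1] back to [0, 1] for saving
grid = make_grid(imgs)          # tile the batch into a single image grid
save_image(grid, 'sample.png')
```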

exponentialXP commented 5 months ago

> In line https://github.com/exponentialXP/diffusion/blob/main/train.py#L92, before applying the reverse transform you need to clamp the output, as is done in the sample_ddpm.py code here: https://github.com/explainingai-code/DDPM-Pytorch/blob/main/tools/sample_ddpm.py#L32
>
> That should ensure the artifacts you are seeing are no longer there; they are probably caused by some x0 output pixels overshooting the -1 to 1 range. Can you add that clamping and confirm whether it improves the final output?

Yep, seems to work. Thank you for your help! :)