dome272 / Diffusion-Models-pytorch

PyTorch implementation of Diffusion Models (https://arxiv.org/pdf/2006.11239.pdf)
Apache License 2.0

Training generates images with full red output #21

Open AdamWojtczak opened 1 year ago

AdamWojtczak commented 1 year ago

While training the unchanged model on a different dataset (portraits of faces), I am getting a bunch of fully red outputs: [image] I also changed the code to train on the same dataset but greyscaled before training, and I still get monocolored outputs, only this time they are either white or black: [image] Has anyone had the same issue? Is there something I can do to prevent this?
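A quick way to tell whether these outputs are truly saturated (stuck at the clamp boundaries) rather than just low-quality samples is to print per-channel statistics of the sampled batch. Below is a minimal diagnostic sketch; `channel_stats` is a hypothetical helper, and the `fake` tensor stands in for whatever the sampling loop returns (assumed here to be `uint8` images shaped `(N, C, H, W)`):

```python
import torch

def channel_stats(x: torch.Tensor) -> dict:
    """Per-channel min/max/mean for a batch of images shaped (N, C, H, W)."""
    return {
        "min": x.amin(dim=(0, 2, 3)).tolist(),
        "max": x.amax(dim=(0, 2, 3)).tolist(),
        "mean": x.float().mean(dim=(0, 2, 3)).tolist(),
    }

# Stand-in for a sampled batch: a fully saturated "red" output,
# i.e. the R channel pinned at 255 and G/B pinned at 0.
fake = torch.zeros(4, 3, 64, 64, dtype=torch.uint8)
fake[:, 0] = 255
print(channel_stats(fake))  # means near [255, 0, 0] indicate saturation
```

If the means sit at the extremes like this, the model output (or the loss) has likely diverged, which is worth checking before blaming the dataset.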

dome272 commented 1 year ago

Hey, can you try training on the original dataset I used and tell me whether you get the same results, or whether that training also fails?

AdamWojtczak commented 1 year ago

These are the results of training on the landscapes dataset. The only thing I changed is the batch size: my GPU has only 4 GB, so it has to be 2. [image]
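A batch size of 2 is a large change from the original training setup, and very small batches can make diffusion training unstable. One common workaround on a 4 GB GPU is gradient accumulation, which keeps the micro-batch at 2 but averages gradients over several steps before updating. This is a generic sketch, not code from this repo; the toy `nn.Linear` model, the synthetic `data` list, and `accum_steps` are all illustrative stand-ins:

```python
import torch
from torch import nn

model = torch.nn.Linear(10, 1)          # stand-in for the UNet
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.MSELoss()
accum_steps = 8                         # micro-batch 2 -> effective batch 16

# Synthetic (input, target) micro-batches of size 2.
data = [(torch.randn(2, 10), torch.randn(2, 1)) for _ in range(16)]

opt.zero_grad()
for step, (x, y) in enumerate(data):
    loss = loss_fn(model(x), y) / accum_steps  # scale so grads average
    loss.backward()                            # gradients accumulate in .grad
    if (step + 1) % accum_steps == 0:
        opt.step()                             # one update per accum window
        opt.zero_grad()
```

The loss is divided by `accum_steps` so the accumulated gradient matches what a single batch of 16 would produce on average.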

stsavian commented 1 year ago

Interesting, I opened a similar issue for the following repository https://github.com/cloneofsimo/minDiffusion/issues/4