guanzhenghua opened this issue 1 year ago
Hey,
did you change the seed?
If not, please change torch.manual_seed(0) to e.g. torch.manual_seed(123)
(lines 21 and 37)
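For context, here is a minimal sketch (not the repository's actual sampling code) of why a fixed manual seed reproduces the same images: the initial noise, and therefore the whole denoising trajectory, is identical on every run. The latent shape below is illustrative only.

```python
import torch

# Minimal sketch, assuming the sampler draws its starting noise with torch.randn;
# the (16, 4, 32, 32) latent shape is an assumption, not the repo's actual shape.
torch.manual_seed(0)
noise_a = torch.randn(16, 4, 32, 32)

torch.manual_seed(0)
noise_b = torch.randn(16, 4, 32, 32)
print(torch.equal(noise_a, noise_b))  # True: same seed -> same initial latents -> same images

# Removing the manual_seed call (or changing its value) gives fresh noise each run:
torch.seed()  # reseed from a non-deterministic source
noise_c = torch.randn(16, 4, 32, 32)
print(torch.equal(noise_a, noise_c))  # almost surely False
```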
Oh, that was stupid of me. I removed the seed and it generated a different picture. Thank you for taking the time to answer this question.
No problem, I should have commented that more clearly in the code and I'm glad to hear that it works now :)
Where did you find the checkpoints? I didn't find any pretrained checkpoints on the GitHub repo, and Hugging Face is unavailable.
Hi, I'm working with the sample_dataset.py file and finding that it generates a lot of images with a high repetition rate. Have you resolved this issue?
Thank you very much for your selfless dedication and for providing the checkpoint. I successfully generated images with sample.py using the checkpoint you provided. Thanks again for your work.

The odd thing is that sample.py generates the same picture every time, which is strange because a diffusion model should generate different pictures. Because of the high quality of the images, I initially thought it was returning a fixed image from the dataset. However, the code shows that the images are denoised by the diffusion model and then decoded by the VAE, so generating the same picture every time is confusing to me. Every time I run sample.py it produces the same 16 images.

Thank you very much for your answer.
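For reference, the flow described above roughly corresponds to this sketch: start from Gaussian noise, denoise with the diffusion model, then decode with the VAE. The names `diffusion_model.denoise_step` and `vae.decode`, the latent shape, and the step count are hypothetical placeholders, not the repository's actual API; the point is that the starting noise is the only random input, so a fixed seed makes the output identical every run.

```python
import torch

def sample_images(diffusion_model, vae, n=16, latent_shape=(4, 32, 32), steps=50):
    # Starting latents: the only source of randomness, so a fixed torch.manual_seed
    # makes every subsequent step (and the decoded images) identical across runs.
    latents = torch.randn(n, *latent_shape)
    for t in reversed(range(steps)):
        # Hypothetical per-step denoising call; the real method name is in the repo.
        latents = diffusion_model.denoise_step(latents, t)
    # Hypothetical VAE decoding call mapping latents back to image space.
    return vae.decode(latents)
```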