dome272 / Diffusion-Models-pytorch

Pytorch implementation of Diffusion Models (https://arxiv.org/pdf/2006.11239.pdf)
Apache License 2.0

about the pretrain_mode #6

Open ZhouHaoWa opened 1 year ago

ZhouHaoWa commented 1 year ago

I have tried three pretrain_model files, and all of them generate something close to Gaussian noise. Does anyone else have the same problem?

dome272 commented 1 year ago

What do you mean by "pretrain_model"? Do you mean that when you train the models, all you get when sampling is noise?

ZhouHaoWa commented 1 year ago

Sorry, I got it wrong; I confused pre-training with checkpoints. I just used the three checkpoints you shared to generate pictures, and of course it doesn't work. I would also like to take this opportunity to thank you for sharing the code and the video explanation. As a beginner I have learned a lot from them, thank you very much, and I will keep following your work.

igoindown commented 1 year ago

> Sorry, I got it wrong; I confused pre-training with checkpoints. I just used the three checkpoints you shared to generate pictures, and of course it doesn't work. […]

Hi, I'm having the same problem as you. What should I do to generate the images correctly?

ChaunceyWang commented 11 months ago

I trained the model following this project's settings; here are my models (a rough sketch of the training call is included after the links below). The generated images look better than those from the released checkpoints, though the results could still be improved.

unconditional

conditional_cls4
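
For reference, a minimal sketch of how training along these lines could be launched with this repo's ddpm.py. It assumes ddpm.py exposes the train(args) entry point used by its launch() helper; the hyperparameter values and the dataset path below are placeholder assumptions, not the exact settings used above:

    from types import SimpleNamespace
    from ddpm import train                         # training loop from this repo's ddpm.py

    # Placeholder settings; check launch() in ddpm.py for the actual defaults.
    args = SimpleNamespace(
        run_name="DDPM_Unconditional",             # run folder for logs and checkpoints
        epochs=500,
        batch_size=12,
        image_size=64,
        dataset_path="path/to/your/image_folder",  # point this at your own dataset
        device="cuda",
        lr=3e-4,
    )
    train(args)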


@ZhouHaoWa @dome272 I have the same problem: I used the model given by the link and ran the demo code below, but the results were almost pure Gaussian noise. Is there anything wrong? (Are the provided model weights randomly initialized?)

ww16

    import torch
    from modules import UNet                # UNet from this repo's modules.py
    from ddpm import Diffusion              # sampling loop from this repo's ddpm.py
    from utils import plot_images           # plotting helper from this repo's utils.py

    device = "cuda"                         # missing in the original snippet
    model = UNet().to(device)
    ckpt = torch.load("unconditional_ckpt.pt", map_location=device)
    model.load_state_dict(ckpt)
    diffusion = Diffusion(img_size=64, device=device)
    x = diffusion.sample(model, n=16)
    plot_images(x)
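
For the conditional checkpoints mentioned above (e.g. conditional_cls4), sampling also needs the conditional UNet and class labels. Below is a minimal sketch along the lines of the conditional script in this repo; the checkpoint filename, the number of classes, and the cfg_scale value are assumptions to adapt to the checkpoint actually used:

    import torch
    from modules import UNet_conditional            # conditional UNet from modules.py
    from ddpm_conditional import Diffusion          # conditional sampling loop
    from utils import plot_images

    device = "cuda"
    n = 16
    model = UNet_conditional(num_classes=10).to(device)             # number of classes is an assumption
    ckpt = torch.load("conditional_ckpt.pt", map_location=device)   # assumed checkpoint filename
    model.load_state_dict(ckpt)
    diffusion = Diffusion(img_size=64, device=device)
    labels = torch.full((n,), 4, dtype=torch.long, device=device)   # e.g. class 4, as in "conditional_cls4"
    x = diffusion.sample(model, n, labels, cfg_scale=3)             # cfg_scale value is an assumption
    plot_images(x)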