WeilunWang / semantic-diffusion-model

Official Implementation of Semantic Image Synthesis via Diffusion Models

The performance of DDPM sampling is totally different from that of DDIM sampling. Why? #14

Closed fido20160817 closed 1 year ago

fido20160817 commented 1 year ago

Group A: DDPM sampling `1000_use_ddim_False_0` vs. DDIM sampling `1000_use_ddim_True_0`

Group B: DDPM sampling `10017_use_ddim_False_0` vs. DDIM sampling `10017_use_ddim_True_0`

Group C: DDPM sampling `10037_use_ddim_False_0` vs. DDIM sampling `10037_use_ddim_True_0`

Group D: DDPM sampling `10045_use_ddim_False_0` vs. DDIM sampling `10045_use_ddim_True_1`

Any tips?
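For reference, I believe the `use_ddim_False/True` part of the filenames corresponds to the `--use_ddim` flag of the sampling script. A minimal sketch of how that flag typically selects the sampler in guided-diffusion-style code, which this repo builds on (exact names in this repo's script may differ):

```python
# Sketch of a guided-diffusion-style sampling entry point.
# `model` and `diffusion` are assumed to come from create_model_and_diffusion().
sample_fn = (
    diffusion.p_sample_loop          # ancestral DDPM sampling: fresh noise at every step
    if not args.use_ddim
    else diffusion.ddim_sample_loop  # DDIM sampling: deterministic once the initial noise is fixed
)
sample = sample_fn(
    model,
    (args.batch_size, 3, args.image_size, args.image_size),
    clip_denoised=args.clip_denoised,
    model_kwargs=model_kwargs,  # for SDM this carries the semantic label map
)
```

So even with the same trained model, the two loops follow different reverse processes, which already accounts for some visual divergence between the paired samples.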

miquel-espinosa commented 1 year ago

Hi @fido20160817. Have you figured out a reason? I am not an expert on this, but from what I have read here, DDIM sampling has the property of good latent interpolation between samples, possibly with the trade-off of lower sample diversity. I would be happy to hear further thoughts on this if you have any.
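To make that concrete, here is a minimal sketch of a single DDIM reverse step in the eta-parameterized form from Song et al.; the names are illustrative, not this repo's API. With eta = 0 the step is deterministic, so the whole trajectory is a fixed function of the initial noise, which is what gives the smooth latent interpolation and, plausibly, less diversity than the stochastic DDPM chain.

```python
import torch

def ddim_step(x_t, eps, alpha_bar_t, alpha_bar_prev, eta=0.0):
    """One reverse step x_t -> x_{t-1} in DDIM form (illustrative sketch).

    eps is the model's noise prediction eps_theta(x_t, t);
    alpha_bar_* are scalar tensors of the cumulative noise-schedule products.
    """
    # Predicted clean image x_0 (the same estimate DDPM uses internally).
    x0_pred = (x_t - (1 - alpha_bar_t).sqrt() * eps) / alpha_bar_t.sqrt()
    # sigma controls the injected noise: eta = 0 -> deterministic DDIM,
    # eta = 1 -> variance matching the ancestral DDPM update.
    sigma = (
        eta
        * ((1 - alpha_bar_prev) / (1 - alpha_bar_t)).sqrt()
        * (1 - alpha_bar_t / alpha_bar_prev).sqrt()
    )
    # Deterministic "direction" term plus optional fresh noise.
    dir_xt = (1 - alpha_bar_prev - sigma**2).sqrt() * eps
    noise = sigma * torch.randn_like(x_t) if eta > 0 else torch.zeros_like(x_t)
    return alpha_bar_prev.sqrt() * x0_pred + dir_xt + noise
```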

Also, may I ask how long it took you to train the model, and how much compute (how many GPUs) you needed? Thanks.

fido20160817 commented 1 year ago

The training details are similar to those in guided-diffusion; see here for more: https://github.com/openai/guided-diffusion/issues/100