kwonminki / Asyrp_official

official repo for Asyrp : Diffusion Models already have a Semantic Latent Space (ICLR2023)
MIT License
249 stars 20 forks

Transfer Guided Diffusion U-Net to your Diffusion Object and back #13

Open sidney1505 opened 1 year ago

sidney1505 commented 1 year ago

Hi,

first thank you for your amazing work!

However, I have some questions, partly related to the other issues here.

1) How can I do standard inference with the pre-trained models? For example, celebahq_p2.pt was supposedly trained with the P2-weighting code, but the original P2-weighting repository has a completely different structure from your code base. Loading the checkpoint yields a models.ddpm.diffusion.Diffusion object, which is quite different from what other forks of guided diffusion (such as P2 weighting) mean by a Diffusion object: it exposes only the variables and interface of the U-Net, with no information about the actual diffusion process. If I wrap it in a models.guided_diffusion.gaussian_diffusion.GaussianDiffusion, p_sample_loop executes successfully, but the result of the reverse diffusion process is random noise. How can I test the pretrained models with plain reverse diffusion, without running Asyrp itself?
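For reference, the plain reverse diffusion I am trying to reproduce looks roughly like this minimal NumPy sketch. The lambda stands in for the checkpointed U-Net's noise prediction, and the schedule parameters are my assumptions, not necessarily the values the repo was trained with:

```python
import numpy as np

# Minimal sketch of plain DDPM ancestral sampling with a linear beta
# schedule; `eps_model(x, t)` is a stand-in for the checkpointed U-Net.
def ddpm_sample(eps_model, shape, T=1000, beta_start=1e-4, beta_end=0.02, seed=0):
    rng = np.random.default_rng(seed)
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(shape)          # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = eps_model(x, t)               # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:
            x = mean + np.sqrt(betas[t]) * rng.standard_normal(shape)
        else:
            x = mean                        # no noise added at the final step
    return x

# Dummy noise predictor that always returns zeros, just to run the loop:
sample = ddpm_sample(lambda x, t: np.zeros_like(x), shape=(4, 4), T=50)
```

With the real checkpoint plugged in at `eps_model`, this loop (or the equivalent p_sample_loop) is what I would expect to produce images rather than noise.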

2) The same problem exists in the other direction, as already mentioned in #10: you can train a models.guided_diffusion.unet.UNetModel with the common guided-diffusion code (e.g. the P2 variant), but how do I then use that U-Net in your code?
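To clarify what I mean by "transfer": if the two U-Nets are architecturally identical and only the wrapper class differs, I would expect something like the following state-dict key remapping to work. The prefix names here are hypothetical, purely for illustration, not the actual keys used by either repository:

```python
# Hypothetical sketch: move weights between two wrappers by remapping
# state-dict key prefixes. Plain lists stand in for torch tensors so the
# example stays self-contained.
def remap_state_dict(state_dict, old_prefix, new_prefix):
    """Return a new state dict with `old_prefix` replaced by `new_prefix`
    at the start of every matching key; other keys are kept unchanged."""
    remapped = {}
    for key, value in state_dict.items():
        if key.startswith(old_prefix):
            key = new_prefix + key[len(old_prefix):]
        remapped[key] = value
    return remapped

# Dummy checkpoint with made-up keys:
ckpt = {"model.down.0.weight": [1.0], "model.down.0.bias": [0.0]}
converted = remap_state_dict(ckpt, "model.", "unet.")
```

Is a remapping like this the intended route, or do the two implementations differ in more than the wrapper (e.g. layer names or architecture details)?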

3) I guess the meta-question is: how do I convert between your models.ddpm.diffusion.Diffusion object and the models.guided_diffusion.unet.UNetModel, in either direction?

Best, Sidney