EricDai0 / advdiff


Request for MNIST Code and DDPM Pretrained Model, and Guidance on Dataset Transfer #4

Closed: GradOpt closed this issue 1 month ago

GradOpt commented 9 months ago

Thank you for your great work. It's a fascinating approach!

I'm currently interested in experimenting with smaller datasets such as MNIST, Fashion MNIST, CIFAR 10/100, and SVHN. I noticed in your paper that you have implemented experiments on MNIST using DDPM with classifier-free guidance. Since these datasets are not entirely compatible with the latent diffusion-based ImageNet implementation, I would greatly appreciate it if you could share the MNIST DDPM pretrained model and the corresponding AdvDiff code.

Additionally, I would be grateful for any guidance on how to transfer this methodology to other datasets, e.g., training a DDPM with classifier-free guidance on CIFAR 10/100.

Thanks again for your time and consideration.

EricDai0 commented 1 month ago

For MNIST, you can refer to https://github.com/abarankab/DDPM. However, we recommend running experiments on high-resolution datasets like ImageNet. For CIFAR 10/100, you can refer to DiffPure (https://github.com/NVlabs/DiffPure).

To use adversarial guidance with other diffusion models, see lines 211-227 and lines 239-261 of ldm/models/diffusion/ddim_adv.py for the two adversarial guidance steps. Lines 241-242 are specific to how the LDM was trained, so you may not need them for other diffusion models. Be careful to set the preprocess function to match the target classifier, and note that K and s also need to be set per diffusion model. Finally, if you want higher image quality, you can use only the adversarial guidance and drop the noise sampling guidance, at the cost of degraded black-box transfer performance.
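For porting purposes, the per-step adversarial guidance in those lines essentially nudges the intermediate sample toward the target label along the classifier's gradient. Below is a minimal PyTorch sketch of that idea, not the repository code itself: `classifier`, `preprocess`, `target_label`, `s`, and `sigma_t` are placeholders you would wire up to your own model, and the noise sampling guidance (lines 239-261) follows a similar pattern applied to the initial noise.

```python
import torch
import torch.nn.functional as F

def adversarial_guidance(x_prev, classifier, target_label, s, sigma_t,
                         preprocess=lambda x: x):
    """Hypothetical sketch of one adversarial-guidance update.

    x_prev:       intermediate sample x_{t-1} from the reverse diffusion step
    classifier:   the target classifier to attack (placeholder)
    target_label: tensor of adversarial target class indices, shape (B,)
    s:            adversarial guidance scale (model-dependent, needs tuning)
    sigma_t:      noise level of the current step (model-dependent)
    preprocess:   maps diffusion-space images to the classifier's input space
                  (e.g. rescale from [-1, 1] to [0, 1], resize, normalize)
    """
    x = x_prev.detach().requires_grad_(True)
    logits = classifier(preprocess(x))
    log_probs = F.log_softmax(logits, dim=-1)
    # Sum of log p(y_target | x) over the batch; its gradient w.r.t. x
    # points toward samples the classifier assigns to the target label.
    selected = log_probs[torch.arange(x.shape[0]), target_label].sum()
    grad = torch.autograd.grad(selected, x)[0]
    # Shift x_{t-1} along the classifier gradient, scaled by s * sigma_t^2.
    return (x + s * (sigma_t ** 2) * grad).detach()
```

In a sampler loop you would call something like this right after computing x_{t-1} at each step, e.g. `x_prev = adversarial_guidance(x_prev, clf, y_a, s, sigmas[t])`, with the exact placement depending on your sampler.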

GradOpt commented 1 month ago

Thank you for your response. I have also noticed the reduced transferability with noise sampling guidance, and may explore this issue further.