ThisisBillhe / EfficientDM

[ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models"
MIT License

How to reproduce the results in the paper #9

Open csguoh opened 3 months ago

csguoh commented 3 months ago

Hi, sorry to bother you.

In the released code of sample_lora_int_model.py, it seems only one image is sampled as a toy example. How can I generate the 500K samples and save them in npz format, to reproduce the results reported in the paper?

Could you share the related code, if convenient? Thank you very much!
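In case it helps others with the same question, here is a rough sketch (not the authors' code) of a batched sampling loop that accumulates images and saves them in the `.npz` layout the guided-diffusion evaluator expects: a uint8 array of shape `[N, H, W, 3]` stored under NumPy's default `arr_0` key. The `sample_batch` function is a hypothetical stand-in for the quantized model's actual DDIM sampling call.

```python
import numpy as np

def sample_batch(batch_size, image_size=256):
    # Placeholder: replace with the quantized model's DDIM sampling call,
    # which should return floats in [-1, 1] of shape [B, 3, H, W].
    return np.random.uniform(-1.0, 1.0, (batch_size, 3, image_size, image_size))

def generate_npz(num_samples, batch_size, out_path="samples.npz"):
    batches = []
    collected = 0
    while collected < num_samples:
        bs = min(batch_size, num_samples - collected)
        x = sample_batch(bs)
        # Map [-1, 1] floats to uint8 [0, 255] and NCHW -> NHWC.
        x = ((x + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
        batches.append(x.transpose(0, 2, 3, 1))
        collected += bs
    arr = np.concatenate(batches, axis=0)
    np.savez(out_path, arr)  # stored under the default key "arr_0"
    return arr
```

For 50K/500K samples you would run this loop on GPU with a large batch size; the key point is only the final uint8 NHWC array and the `np.savez` call.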

csguoh commented 3 months ago

I also have another question: why does DDIM on ImageNet use 20 timesteps in the paper, while the released code seems to use 100 timesteps? Does this modification influence the performance?

ThisisBillhe commented 3 months ago

Hi, you may refer to another issue for reproducing the results. Also, the number of diffusion steps can be adjusted as you wish.

csguoh commented 3 months ago

I will try it, thanks for your reply!

csguoh commented 3 months ago

Hi, authors!

I followed the instructions in the README, ran the released code with 100 DDIM timesteps on ImageNet, and used the guided-diffusion codebase for evaluation. However, I still cannot reproduce the results in Table 2. Since sampling 50000 images takes too long, I used 30000 instead. My reproduced results for W4A4 (the default setup in the code) are as follows: [screenshot: Snipaste_2024-06-09_14-35-15]

I also evaluated the pre-trained checkpoint (20 DDIM steps) you provided, and the results are as follows: [screenshot]

The results reported in the paper are 250.90 || 6.17 || 7.75.

I suspect this reproduction failure might be caused by the sample-generation process. I wrote the sampling code myself following sample_lora_int_model.py, but it does not seem to reproduce the results. Could you give me some help with this? Thanks!
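For reference, evaluation itself is a single script invocation in the guided-diffusion codebase; the file names below are placeholders (the ImageNet 256x256 reference batch is distributed with that repo), and `samples.npz` stands for whatever batch your sampling script produced.

```shell
# Compare a generated sample batch against the ImageNet reference batch
# (prints IS / FID / sFID and precision/recall).
python evaluations/evaluator.py VIRTUAL_imagenet256_labeled.npz samples.npz
```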