openai / guided-diffusion

MIT License

Question about FID #77

Open pokameng opened 1 year ago

pokameng commented 1 year ago

Does **num_samples** affect the FID? I ran sampling with the lsun_bedroom.pt model provided with the paper and then evaluated the generated .npz against the reference batch VIRTUAL_lsun_bedroom256.npz, but did not get the scores reported in the article. [screenshot] My evaluator script is as follows: [screenshot]

Can you give me some advice? @longouyang @welinder @JoshuaGross @jietang @esigler
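On the num_samples question: FID is computed from sample means and covariances, so estimates from too few samples are biased upward. A minimal sketch of the Fréchet distance with numpy/scipy illustrates this (synthetic Gaussian features stand in for Inception activations; this is not the repo's evaluator, just the formula):

```python
import numpy as np
from scipy import linalg

def fid(feats1, feats2):
    """Frechet distance between two sets of feature vectors (rows = samples)."""
    mu1, mu2 = feats1.mean(axis=0), feats2.mean(axis=0)
    c1 = np.cov(feats1, rowvar=False)
    c2 = np.cov(feats2, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from sqrtm
    return float(((mu1 - mu2) ** 2).sum() + np.trace(c1 + c2 - 2.0 * covmean))

rng = np.random.default_rng(0)
base = rng.normal(size=(5000, 8))   # stand-in for reference Inception features
many = rng.normal(size=(5000, 8))   # same distribution, many samples
few = rng.normal(size=(50, 8))      # same distribution, far fewer samples

print(fid(base, many))  # small: statistics are well estimated
print(fid(base, few))   # larger: small-sample bias inflates the score
```

So comparing runs with different num_samples (the paper uses 50k) is not apples to apples, even with an identical model.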

LuoBingjun commented 1 year ago

I got the same problem. My results were:

- Inception Score: 2.4110054969787598
- FID: 68.9926382683044
- sFID: 698.4445687906518
- Precision: 0.61
- Recall: 0.546

akrlowicz commented 1 year ago

any updates on this?

dineshdaultani commented 8 months ago

I was having a similar problem: the FID was very high, around 150+, on the CIFAR-10 dataset. When I included the diffusion flags, such as DIFFUSION_FLAGS="--diffusion_steps 1000 --noise_schedule cosine", with the image_sample.py script, the FID dropped to about 16. Since I am not using the same hyperparameters as the paper's CIFAR-10 setup and trained for only 100k iterations, the FID values are not so bad.
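As a sketch, a sampling invocation that repeats the training-time flags might look like the following. The specific flag values here are hypothetical examples, not a recommended configuration; they must match whatever you passed to the training script, since the sampler rebuilds the model and diffusion process from them:

```shell
# Hypothetical flags -- replace with the exact ones from your training run.
MODEL_FLAGS="--image_size 32 --num_channels 128 --num_res_blocks 3 --learn_sigma True"
DIFFUSION_FLAGS="--diffusion_steps 1000 --noise_schedule cosine"

python scripts/image_sample.py --model_path /path/to/model.pt \
    --num_samples 50000 $MODEL_FLAGS $DIFFUSION_FLAGS
```

If the sampler's flags disagree with training (e.g. a linear schedule at sampling time against a cosine-trained model), the samples degrade and the FID balloons.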

I figured out the issue by looking at the generated samples. Try visualizing the images in the generated .npz file. If they look wrong, something is likely off in the sampling script, assuming your training hyperparameters are the same as (or close to) the authors' suggestions. In that case, try the solution above. I hope this helps!
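For the visualization step, a small helper like the one below works: image_sample.py saves its output with np.savez, so the images land under the key "arr_0" as a uint8 array of shape (N, H, W, 3). The file and function names are just examples.

```python
import numpy as np
from PIL import Image

def save_grid(npz_path, out_path, rows=4, cols=4):
    """Tile the first rows*cols samples from a sampler .npz into one PNG."""
    # image_sample.py saves positionally via np.savez, hence the "arr_0" key.
    data = np.load(npz_path)["arr_0"][: rows * cols]
    n, h, w, c = data.shape
    grid = (
        data.reshape(rows, cols, h, w, c)
        .transpose(0, 2, 1, 3, 4)  # interleave row/height and col/width axes
        .reshape(rows * h, cols * w, c)
    )
    Image.fromarray(grid).save(out_path)

# Example usage:
# save_grid("samples.npz", "sample_grid.png")
```

If the resulting grid is noise or heavily distorted, debug the sampling flags before worrying about the evaluator.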

Note: your DIFFUSION_FLAGS and MODEL_FLAGS might differ from mine; pass the same flags to the sampling script that you used in your training script.