-
Hi! Thank you for your great work!!
I followed the default `emage.yaml` you provided, changing only `ddp: True` and leaving the other options untouched. I also used the dataset you provided. However,…
-
Hi!
Recently I have been trying to replicate your score. While working on the SD unlearning part, I generated the mask, then trained the model with Salun; after that I generated a certain number of images t…
-
I can only find the FID/IS calculation code for unconditional models in `sample_diffusion.py`. How can I calculate the FID/IS scores for conditional models?
-
Do you calculate the FID score by comparing the training set against the generated images? I cannot reproduce the FID reported in the paper. Also, which FID git repo did you choose to evaluate the results?
-
Hello @hwalsuklee, thanks for the repo!
Would it be a good idea to also add the FID score and/or Inception Score to the various experiments?
Thanks!
-
It would be nice to add the FID score to `evaluate` as a metric for generative models.
cc @patrickvonplaten @anton-l
-
Hi,
Thank you so much for your work
How do you choose the checkpoint with the best FID score? I used your code to reproduce the results, but none of them reached the performance of the original paper. My model in the F…
-
Hello,
I'm wondering if the models supported in this project have been evaluated to see if their performance replicates the reported metrics in the original papers (primarily FID and CLIP score f…
-
I have retested the FID score with torch-fidelity on FFHQ and CelebA-HQ using the default DDIM sampler and the recommended step counts (200 / 500), and it gives a much worse FID score (about 9.+) than the re…
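For what it's worth, discrepancies like this often come down to which FID implementation is used and how the reference statistics are computed (image resizing, which Inception weights, which reference set). The Fréchet distance itself is simple; a minimal sketch, assuming you already have the mean and covariance of the Inception activations for both image sets (the part where implementations actually differ), might look like:

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """FID-style Frechet distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * sqrt(S1 @ S2))."""
    diff = mu1 - mu2
    # Matrix square root of the covariance product; disp=False suppresses warnings.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        # Numerical noise can introduce tiny imaginary components.
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Identical statistics should give FID = 0 (up to numerical error).
mu, sigma = np.zeros(4), np.eye(4)
print(frechet_distance(mu, sigma, mu, sigma))  # → 0.0
```

Even with this formula fixed, the statistics themselves depend on preprocessing, so comparing scores across repos (e.g. torch-fidelity vs. other FID implementations) on the exact same image folders is usually the first sanity check.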
-
Hello, in the original K-Diffusion paper the authors report FID scores for CIFAR in the low single digits (e.g. 1.8). However, the FID scores from this repo all come out far higher, e.g. 27, 3…