Open · EternalEvan opened this issue 1 year ago
Hi, thanks for your wonderful work, which has inspired us a lot. However, we are wondering how the FID metric in your paper is calculated; in other words, which reference dataset did you choose when computing FID? We have reproduced your results, which are visually amazing, but cannot reach the FID of around 20 reported in the paper.

Hi. For evaluation on CelebA-Test (a synthetic dataset), we use CelebA-Test-HQ as the reference dataset. For evaluation on the real-world datasets, we use FFHQ as the reference dataset.
Thank you for your prompt response! I have another question regarding the FID library you used. Many image restoration projects compute FID with the pytorch-fid repository. Could you confirm whether you also used this library, and if so, whether you used its default settings or other code?
I'm eager to replicate your methodology, and matching your quantitative metrics is my first step. Your help is greatly appreciated! :)
We use pytorch-fid to calculate FID :)
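For reference, a minimal sketch of what an invocation might look like via pytorch-fid's Python API (the CLI equivalent is `python -m pytorch_fid <results_dir> <reference_dir>`); the directory names below are placeholders, not our actual paths:

```python
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

# Compare restored outputs against the reference set described above
# (both arguments are directories of images; paths here are placeholders).
fid = calculate_fid_given_paths(
    ["results/celeba_test", "data/celeba_test_hq"],
    batch_size=50,  # pytorch-fid's default batch size
    device="cuda" if torch.cuda.is_available() else "cpu",
    dims=2048,      # default: final-average-pooling InceptionV3 features
)
print(f"FID: {fid:.2f}")
```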
A follow-up question, if I may: which layer of the Inception network did you use?
As both LFW and WIDER contain fewer than 2048 images, we cannot compute FID using the default layer (as mentioned here: https://github.com/mseitzer/pytorch-fid?tab=readme-ov-file#using-different-layers-for-feature-maps).
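For context, here is a sketch of how we would switch to a lower-dimensional feature layer, following the README linked above; the directory names are placeholders, and `dims` must be one of 64, 192, 768, or 2048:

```python
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

# With fewer than 2048 images per set, the 2048-d covariance estimate is
# rank-deficient, so we drop to the 768-d pre-aux-classifier features
# (equivalent to passing --dims 768 on the CLI). Paths are placeholders.
fid = calculate_fid_given_paths(
    ["results/lfw_test", "data/ffhq"],
    batch_size=50,
    device="cuda" if torch.cuda.is_available() else "cpu",
    dims=768,
)
print(f"FID (dims=768): {fid:.2f}")
```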