Closed rishabhjain9619 closed 2 years ago
It seems to be a training-set downsampling/compression problem. When computing FID statistics, GFLA seemingly uses a special set of parameters to preprocess the training/validation set. To reproduce the GFLA result and enable a fair comparison, GFLA's authors kindly shared their training and validation sets (with the preprocessed downsampling) with us, and that's how we got our numbers for 256x256.
We also get the same better set of results as yours (12.25 for us and 9.87 for GFLA) by directly loading the high-res images and downsampling them with bilinear interpolation before computing the FID statistics.
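For reference, the bilinear downsampling step described above can be sketched as below. This is only an illustration, not the repo's actual preprocessing code; the target size and the use of Pillow here are assumptions, so adjust them to match your dataset before computing FID (e.g. with `pytorch-fid`).

```python
from PIL import Image

def downsample_bilinear(img, size=(176, 256)):
    """Downsample a high-res image with bilinear interpolation.

    `size` is (width, height) as Pillow expects; 176x256 is a common
    DeepFashion crop but is only an assumed default here.
    """
    return img.resize(size, Image.BILINEAR)

# Example: downsample a (hypothetical) high-res image before FID.
highres = Image.new("RGB", (352, 512))
lowres = downsample_bilinear(highres)
```

Computing FID statistics on images downsampled this way, rather than on separately saved (and possibly JPEG-recompressed) low-res copies, is what produced the 12.25 / 9.87 numbers above.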
Thanks for the quick reply. Could you share the code for downsampling the images, or the post-processed images themselves?
I am not sure whether we are allowed to share the data that GFLA's authors shared with us. You may want to reach out to GFLA's authors for this. :)
Sure, no issues, thanks for the clarifications
Hi @cuiaiyu ,
Thanks for sharing your work. I was trying to reproduce the FID values for 256x256 images given in the paper using your code. However, I got a lower FID value of 12.25 instead of the 13.10 mentioned in the paper.
For GFLA also, I am getting a value of 9.87 instead of 10.57, which you also mentioned in the GFLA comment. Can you let me know how that issue was solved?