Open · hse1032 opened this issue 9 months ago
Thanks for your interest in our project!
We consistently use the same evaluation module for all our numerical results (see eval.py). For real samples, we use the official data preprocessing script of each compared method when using its provided pretrained checkpoint. I am pretty sure the metrics of EG3D are computed with its official data preprocessing script, which re-crops the original FFHQ photos. Note that the FID-20K number is calculated between 20k real and 20k fake samples.
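For illustration only, this is not our eval.py: a minimal sketch of the FID-20K protocol described above, using the widely available pytorch-fid package. The directory paths are placeholders, and each directory is assumed to contain exactly 20,000 images.

```python
# pip install pytorch-fid  (sketch assumes the pytorch-fid >= 0.2 API)
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

# Hypothetical paths: real images come from the compared method's own
# preprocessing script; fake images are samples from its checkpoint.
real_dir = "data/ffhq_recropped_20k"   # placeholder path, 20k real images
fake_dir = "samples/eg3d_20k"          # placeholder path, 20k fake images

device = "cuda" if torch.cuda.is_available() else "cpu"
fid = calculate_fid_given_paths(
    [real_dir, fake_dir],
    batch_size=50,
    device=device,
    dims=2048,  # standard InceptionV3 pool3 feature dimension
)
print(f"FID-20K: {fid:.2f}")
```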
Thank you for your prompt reply!
Sorry for my confusion. I mistakenly assumed that FID-20K was computed by comparing 20K fake images against the full set of real images, as is done in EG3D's FID implementation.
I will try to reproduce the number reported in your paper. Thanks again,
Hi, sorry for the inconvenience.
I have a few more questions about the evaluation protocol.
EG3D seems to sample camera poses from the original dataset distribution (e.g., poses estimated from the FFHQ images). In contrast, GRAM_HD randomly samples camera poses from a predefined distribution. Did you evaluate EG3D following its original protocol, or did you randomly sample poses from the predefined distribution? (See the sketch below for what I mean by the two strategies.)
In eval.py, the default values for the number of images and the image size are 10K and 128. Since FID-20K uses 20K images, I assume the number of images should be set to 20K, but which image size should I use (128 or 256)?
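To make the first question concrete, here is a minimal sketch of the two pose-sampling strategies I am asking about. Everything in it is a placeholder: the `dataset_poses` array, the function names, and the Gaussian parameters are my assumptions, not your actual code or the values from the paper.

```python
import numpy as np

# Strategy 1 (EG3D-style): sample camera poses from the empirical
# dataset distribution, e.g. poses estimated from FFHQ images.
# `dataset_poses` is a hypothetical (N, 2) array of (yaw, pitch) labels.
def sample_pose_from_dataset(dataset_poses, rng):
    idx = rng.integers(len(dataset_poses))
    return dataset_poses[idx]

# Strategy 2 (GRAM_HD-style, as I understand it): sample from a
# predefined distribution. The means and standard deviations below
# are placeholders, not the values used in the paper.
def sample_pose_predefined(rng, yaw_std=0.3, pitch_std=0.15):
    yaw = rng.normal(loc=np.pi / 2, scale=yaw_std)
    pitch = rng.normal(loc=np.pi / 2, scale=pitch_std)
    return np.array([yaw, pitch])

rng = np.random.default_rng(0)
# 20K poses, matching the 20K fake samples used for FID-20K
poses = [sample_pose_predefined(rng) for _ in range(20_000)]
```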
Thanks,
Hello. First of all, thank you for sharing your valuable codebase!
I have some questions about the experimental results in Table 1 of your paper.
I would like to know the experimental settings used to compute the numbers in Table 1. The FID-20K of EG3D is 8.72, and I wonder which dataset you used to compute it.
My guess is that you used the official EG3D weights and computed FID-20K against real images obtained with GRAM's preprocessing (https://github.com/microsoft/GRAM).
I hope this question does not bother you too much.
Thanks,