jetory12 opened this issue 2 years ago
Hi! Yeah, we do not report the MEAD results in the main table since we could only run StyleGAN-V and MoCoGAN-HD on it (DIGAN seemed too expensive to run on it, though we did not actually try). Our FID/FVD scores are given in the running text of the paper. We do not report FID/FVD for MoCoGAN-HD generations, only for StyleGAN-V.
The metrics scripts are src/scripts/calc_metrics_for_dataset.py and src/scripts/calc_metrics.py, depending on whether you want to compute the metrics between two datasets or between a dataset and a generator. Their usage is described in the Evaluation section of README.md. As for the dataset: I will share the link tomorrow (it is 170G, so it takes some time to find suitable cloud storage for it).
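For the dataset-vs-dataset case, the invocation looks roughly like the following; the flag names here are from memory, so please double-check them against the Evaluation section of README.md:

```bash
# Dataset-vs-dataset metrics (flag names recalled from the README -- verify there):
python src/scripts/calc_metrics_for_dataset.py \
    --real_data_path /path/to/real_videos \
    --fake_data_path /path/to/fake_videos \
    --resolution 256 \
    --metrics fvd2048_16f,fid50k_full \
    --gpus 1

# For dataset-vs-generator, src/scripts/calc_metrics.py is used instead; it takes
# the trained network .pkl (see README.md for its exact arguments).
```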
I am sorry for replying late. If you have any other questions or need additional information, feel free to ask!
Hi, thanks a lot for the reply.
I still have a question about the second part. You said that you used the official DIGAN and MoCoGAN-HD repos to evaluate them with your new FVD protocol. I tried combining your protocol with their repos, but the numbers are far off. Since they use different data loaders etc. than you do, I wanted to ask whether you could share your modified DIGAN and MoCoGAN-HD repos with your FVD protocol integrated, or, failing that, the exact procedure you used to evaluate their re-trained models. I want to make sure that I evaluate their models correctly; this would help me a lot in reproducing all the numbers, since, like you, I want to use your new protocol across all models. No need to upload any datasets; it would already be great if just the modified DIGAN and MoCoGAN-HD repos were uploaded somewhere, if that is possible.
Hi! For their repos, we did not integrate our FVD evaluation into them; rather, we sampled from the models to construct a dataset of fake videos and then used our src/scripts/calc_metrics_for_dataset.py script. So the only things we changed were the sampling procedures (to generate long videos, videos starting at some timestep t, etc.). For DIGAN, we also changed its data loading strategy to select videos uniformly at random (simply by using our dataset class). We do not have any repos for this, just a big mess of our infrastructure bindings and bash scripts. In case you need them, here are our versions of the sampling scripts (sorry for the code quality):
Also, I have uploaded our version of the MEAD 1024 dataset: https://disk.yandex.ru/d/PACh_RRsVJ93AA (Yandex.Disk split the archive into parts; it is 170G in total).
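To make the sample-then-evaluate flow above concrete, here is a minimal, hypothetical sketch; this is not our actual code, sample_video stands in for whatever sampling function the DIGAN / MoCoGAN-HD repo exposes, and the frame-folder layout is an assumption about what our dataset class reads:

```python
# Hypothetical sketch of the "sample, dump to disk, evaluate as a dataset" flow.
# `sample_video` is a stand-in for the model-specific sampling function, NOT a real API.
import os
from PIL import Image

def dump_fake_dataset(sample_video, out_dir, num_videos=2048, num_frames=16):
    """sample_video(num_frames) should return a uint8 array of shape [T, H, W, 3]."""
    for i in range(num_videos):
        video = sample_video(num_frames)
        video_dir = os.path.join(out_dir, f"{i:06d}")   # one sub-directory per video
        os.makedirs(video_dir, exist_ok=True)
        for t, frame in enumerate(video):
            # one image per frame (assumed layout of the dataset class)
            Image.fromarray(frame).save(os.path.join(video_dir, f"{t:06d}.jpg"))

# Afterwards, point --fake_data_path of src/scripts/calc_metrics_for_dataset.py
# at `out_dir` and --real_data_path at the real dataset (see the command above).
```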
Hi, you report MEAD results in the text of the paper and compare against MoCoGAN-HD, but I cannot find their numbers in the paper. Can you share the full results for MEAD? Did you compare against DIGAN on MEAD as well?
Also, I want to use your new evaluation protocol to compare against your model, DIGAN, and MoCoGAN-HD, using your evaluation scripts. Would you mind sharing the metrics scripts / modified dataset file you used for the DIGAN and MoCoGAN-HD repos?