mit-han-lab / anycost-gan

[CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
https://hanlab.mit.edu/projects/anycost-gan/
MIT License

Is this a bug? #27

Closed anguoyang closed 1 year ago

anguoyang commented 1 year ago

Hi @junyanz @songhan, I found a possible bug; I'm not sure, so please correct me if I'm wrong, thank you. When you calculate the FID, you use a transform with a random flip: https://github.com/mit-han-lab/anycost-gan/blob/master/tools/calc_inception.py#L53, but in the training code there is just a clamp, no flip: https://github.com/mit-han-lab/anycost-gan/blob/master/tools/train_gan.py#L279. That may lead to a wrong evaluation result.
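
To illustrate the mismatch I mean, here is a rough sketch of the two pipelines (simplified, not the repo's exact code):

```python
# Simplified sketch of the two pipelines being compared (not the exact repo code).
import torch
from torchvision import transforms

# calc_inception.py-style transform: real images may be randomly flipped
# before Inception features are extracted (flip probability is illustrative).
inception_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])

# train_gan.py-style postprocessing: generated images are only clamped,
# never flipped (the [-1, 1] -> [0, 1] remap here is illustrative).
def postprocess(fake_img: torch.Tensor) -> torch.Tensor:
    return ((fake_img + 1) / 2).clamp(0, 1)
```

If the flip were really applied on only one side, the real and fake feature statistics would be computed under different augmentations, which could bias the FID.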

tonylins commented 1 year ago

Hi, thanks for the comment. We do not enable the --flip argument when computing inception features, so no flipping is used in either case. Please refer to the command in the README when using calc_inception.py.
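
Roughly, the flip in calc_inception.py is gated behind that argument, along these lines (a simplified sketch, not the exact code):

```python
import argparse
from torchvision import transforms

parser = argparse.ArgumentParser()
parser.add_argument('--flip', action='store_true')  # off by default
args = parser.parse_args()

# The random flip only takes effect when --flip is passed; with the
# README command it is effectively a no-op (p=0).
transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5 if args.flip else 0.0),
    transforms.ToTensor(),
])
```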

anguoyang commented 1 year ago

Hi, thanks for your feedback. Another question: it seems that you use the same data loader for evaluation (or part of the train loader). Do you think it would be better to split the data into train/val sets?

tonylins commented 1 year ago

We follow the same practice as StyleGAN2 for calculating FID, where the training set is used for the FID calculation.

anguoyang commented 1 year ago

Thanks, got it. By the way, I tried a ResNet-50 encoder to replace the mapping network in my face image enhancement task (keeping the synthesis network). I found the result is much worse compared with GPEN (which is modified from StyleGAN2); the output images are similar to ESRGAN's: sharp enough, but obviously fake to human eyes. I'm also not sure how to slim down the default StyleGAN2 mapping network...
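
For context, my encoder setup is roughly the following (a minimal sketch with hypothetical names, not from this repo; the frozen synthesis network is assumed to accept per-layer W+ codes):

```python
# Sketch: a ResNet-50 image encoder that predicts per-layer W+ codes
# for a frozen StyleGAN2 synthesis network (all names are hypothetical).
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ResNet50Encoder(nn.Module):
    def __init__(self, n_latent: int = 18, style_dim: int = 512):
        super().__init__()
        backbone = resnet50(weights=None)
        # replace the classification head with a W+ regression head
        backbone.fc = nn.Linear(backbone.fc.in_features, n_latent * style_dim)
        self.backbone = backbone
        self.n_latent, self.style_dim = n_latent, style_dim

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        w = self.backbone(img)  # (B, n_latent * style_dim)
        return w.view(-1, self.n_latent, self.style_dim)  # per-layer W+ codes

# usage sketch (synthesis is an assumed frozen StyleGAN2 synthesis network):
# encoder = ResNet50Encoder()
# w_plus = encoder(degraded_face)   # predict W+ from the low-quality input
# restored = synthesis(w_plus)      # decode with the frozen generator
```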