Hi, thanks for the comment. We do not enable the `--flip` argument when computing inception features, so no flipping is used in either case. Please refer to the command in the README when using calc_inception.py.
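For reference, a `--flip` flag typically gates the preprocessing transform like this (a rough sketch with illustrative names and sizes, not the exact code in calc_inception.py):

```python
# Sketch: how a --flip flag usually gates the preprocessing pipeline.
# Function name and resize/crop sizes are illustrative, not the repo's.
import torchvision.transforms as T

def build_transform(flip: bool = False) -> T.Compose:
    ops = [
        T.Resize(256),
        T.CenterCrop(256),
        T.ToTensor(),
        T.Normalize([0.5] * 3, [0.5] * 3),
    ]
    if flip:  # only active when --flip is passed; the README command omits it
        ops.insert(0, T.RandomHorizontalFlip())
    return T.Compose(ops)
```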
Hi, thanks for your feedback. Another issue: it seems that you use the same data loader for evaluation (or part of the train loader). Do you think it would be better to split into train/val sets?
We follow the same practice as StyleGAN2 for calculating FID, where the training set is used for the reference statistics.
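For context, FID compares the inception statistics (feature mean and covariance) of the reference set, here the training set, against generated samples. A minimal sketch of the metric itself, using the standard Fréchet distance formula rather than the repo's implementation:

```python
# Standard FID between two Gaussians fitted to inception features.
import numpy as np
from scipy import linalg

def fid(mu1, sigma1, mu2, sigma2):
    diff = mu1 - mu2
    # Matrix square root of the covariance product; it can come back
    # complex due to numerical noise, in which case keep the real part.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```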
Thanks, got it. By the way, I tried a ResNet-50 encoder in place of the mapping network for my face image enhancement task (keeping the synthesis network), and the result is much worse (compared with GPEN, which is modified from StyleGAN2). The result images are similar to ESRGAN's: sharp enough, but obviously fake to the human eye. I am not sure how to slim down the default StyleGAN2 mapping network...
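For what it's worth, by "slimming" I mean something like shrinking the default 8-layer, 512-dim MLP, e.g. (a hypothetical sketch; the real StyleGAN2 mapping network also normalizes z and uses equalized-learning-rate linear layers):

```python
# Hypothetical slimmer mapping network: fewer layers and a narrower
# hidden width than StyleGAN2's default 8 x 512 MLP.
import torch.nn as nn

def slim_mapping(z_dim: int = 512, w_dim: int = 512,
                 hidden: int = 256, n_layers: int = 4) -> nn.Sequential:
    layers, d = [], z_dim
    for _ in range(n_layers - 1):
        layers += [nn.Linear(d, hidden), nn.LeakyReLU(0.2)]
        d = hidden
    layers.append(nn.Linear(d, w_dim))  # final projection to w space
    return nn.Sequential(*layers)
```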
Hi @junyanz @songhan, I found a possible bug (please correct me if I am wrong, thank you). When you calculate the FID, the transform includes a random flip:
https://github.com/mit-han-lab/anycost-gan/blob/master/tools/calc_inception.py#L53
but in the training code there is just a clamp, no flip:
https://github.com/mit-han-lab/anycost-gan/blob/master/tools/train_gan.py#L279
That may lead to a wrong evaluation result.
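To make the suspected mismatch concrete, the two code paths look roughly like this (paraphrased sketch, not the repo's exact code; `eval_postprocess` is an illustrative name):

```python
# Paraphrase of the two preprocessing paths being compared.
import torch
import torchvision.transforms as T

# tools/calc_inception.py: real images can be randomly flipped
inception_transform = T.Compose([
    T.RandomHorizontalFlip(),  # the flip in the transform at L53
    T.ToTensor(),
    T.Normalize([0.5] * 3, [0.5] * 3),
])

# tools/train_gan.py: generated images are only clamped before the
# inception network, with no flipping applied
def eval_postprocess(fake_img: torch.Tensor) -> torch.Tensor:
    return fake_img.clamp(-1.0, 1.0)
```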