XingangPan / GAN2Shape

Code for GAN2Shape (ICLR2021 oral)
https://arxiv.org/abs/2011.00844
MIT License

Issues getting results with an FFHQ-based GAN #12

Open · NOlivier-Inria opened this issue 3 years ago

NOlivier-Inria commented 3 years ago

Hi, I tried reproducing your CelebA results with a GAN trained on the FFHQ dataset. I ran the pretraining and the face-wise optimization on it, but obtained the following results:

The input image: 1

stage 0: 1_normal_rotate_stage0_007
stage 2: 1_normal_rotate_stage2_007

Could this be due to using "full" images rather than a cropped version, or to not using FFHQ-specific view_mvn.pth and light_mvn.pth networks? Or is there perhaps another reason?

XingangPan commented 3 years ago

@NOlivier-Inria Hi, I haven't tried tuning the parameters for the FFHQ dataset. But I think one reason it is hard to make this work on FFHQ is that the FFHQ images are carefully aligned, so the trained GAN cannot shift the face. This could make it difficult to produce good 'projected samples'. There are two things you may try: 1) Use an FFHQ-specific view_mvn.pth; the provided 'view_mvn.pth' file should do. 2) Retrain your GAN on FFHQ with a random-shift data augmentation strategy. This could help the GAN fit better with our framework.
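
For reference, a minimal sketch of such a random-shift augmentation, assuming a PyTorch/torchvision data pipeline for GAN training; the image resolution and maximum shift fraction below are illustrative choices, not values taken from this repo:

```python
# Hypothetical random-shift augmentation for de-aligning FFHQ faces
# before (re)training the GAN, as suggested above.
import torchvision.transforms as T

max_shift = 0.1  # assumed: shift up to 10% of the image size in x and y

train_transform = T.Compose([
    T.Resize(128),  # assumed training resolution
    # RandomAffine with translate applies a random horizontal/vertical shift,
    # breaking FFHQ's strict alignment so the GAN also learns shifted faces.
    T.RandomAffine(degrees=0, translate=(max_shift, max_shift), fill=0),
    T.ToTensor(),
    T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])
```

The transform would be plugged into whatever dataset/loader the GAN training code uses; the key point is only the random translation, which should make shifted 'projected samples' easier to fit.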