Open NOlivier-Inria opened 3 years ago
@NOlivier-Inria Hi, I haven't tried tuning the parameters for the FFHQ dataset. But I think one reason it's hard to make this work on FFHQ is that FFHQ images are carefully aligned, so the trained GAN cannot shift the face. This could make it difficult to produce good 'projected samples'. There are two things you could try: 1) Use an FFHQ-specific view_mvn.pth. The provided 'view_mvn.pth' file should do. 2) Retrain your GAN on FFHQ, but with a random-shift data augmentation strategy. This could possibly make the GAN fit better with our framework.
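The random-shift augmentation mentioned in point (2) could be sketched roughly like this. This is a hypothetical helper (not something the repo ships), assuming images come in as H×W×C NumPy arrays; in an actual training pipeline you would apply the equivalent transform to each batch before feeding the GAN:

```python
import numpy as np

def random_shift(img, max_shift=8, rng=None):
    """Randomly translate an HxWxC image by up to max_shift pixels
    in each direction, filling the exposed border with edge pixels.
    Illustrative sketch only; parameter names are assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    # Draw a vertical and horizontal offset in [-max_shift, max_shift]
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    # Pad with edge replication, then crop back to the original size
    padded = np.pad(
        img,
        ((max_shift, max_shift), (max_shift, max_shift), (0, 0)),
        mode="edge",
    )
    h, w = img.shape[:2]
    y0, x0 = max_shift + dy, max_shift + dx
    return padded[y0:y0 + h, x0:x0 + w]
```

The idea is simply to break FFHQ's rigid alignment so the generator learns to place faces at slightly varying positions, which should help the 'projected samples' step.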
Hi, I tried reproducing your CelebA results with a GAN trained on the FFHQ dataset. I ran the pre-training on it, then the face-wise optimization, but obtained the following results:
The image :![1](https://user-images.githubusercontent.com/73844477/115417346-d4422c80-a1f8-11eb-86e5-b9de4e75e11e.png)
stage 0:
stage 2: ![1_normal_rotate_stage2_007](https://user-images.githubusercontent.com/73844477/115390820-fe86f080-a1de-11eb-8290-59c30d5d213c.png)
Could this be due to using "full" images rather than a cropped version, or to not using FFHQ-specific view_mvn.pth and light_mvn.pth networks? Or is there perhaps another reason?