royorel / Lifespan_Age_Transformation_Synthesis

Lifespan Age Transformation Synthesis code

Why does my model always generate video with only gray background when testing? #30

Closed glorioustory closed 1 year ago

glorioustory commented 2 years ago

Hi @royorel, I have prepared about 500 pictures for each age, so there are roughly 1500 images in the train0-2 subdirectory, 2000 in train3-6, and so on, for 10 age classes in total.

I started training with:

```
python train.py --gpu_ids 0 --dataroot ./datasets/males --name males_model --batchSize 2 --verbose
```

After one epoch on Linux, loss_log.txt shows:

```
(epoch: 1, iters: 22440, time: 5.531) loss_G_Adv: 6.233 loss_G_Cycle: 2.508 loss_G_Rec: 2.522 loss_G_identity_reconst: 0.000 loss_G_age_reconst: 0.480 loss_D_real: 0.003 loss_D_fake: 0.002 loss_D_reg: 0.020
```

The three models it saved are about the same size as your pre-trained models (mine are about 80 MB each). I then tested on Windows with:

```
python test.py --name males_model --which_epoch latest --display_id 0 --traverse --interp_step 0.05 --image_path_file test/111.jpg --make_video --in_the_wild --verbose
```

However, the generated video shows only a gray background and no face. If I only change the path to point at your pre-trained models, it works fine, and the model parameters I print are identical in both cases. Why is this? I look forward to your reply.
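Before a long training run, it can help to sanity-check that every age-class folder actually contains the expected number of images. Below is a minimal sketch of such a check; the `train0-2`-style folder names follow the layout described above, and the dataset root path is an assumption, not part of the repo's code:

```python
import os

# File extensions we count as images; adjust if your dataset uses others.
IMAGE_EXTS = {".jpg", ".jpeg", ".png"}

def count_images_per_class(dataroot):
    """Return {subdir_name: image_count} for each train* subdirectory."""
    counts = {}
    for entry in sorted(os.listdir(dataroot)):
        subdir = os.path.join(dataroot, entry)
        if os.path.isdir(subdir) and entry.startswith("train"):
            counts[entry] = sum(
                1 for f in os.listdir(subdir)
                if os.path.splitext(f)[1].lower() in IMAGE_EXTS
            )
    return counts

if __name__ == "__main__" and os.path.isdir("./datasets/males"):
    # "./datasets/males" is the example dataroot from the command above.
    for name, n in count_images_per_class("./datasets/males").items():
        print(f"{name}: {n} images")
```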

royorel commented 2 years ago

1 epoch is not nearly enough to train the model. That's why you're getting just a gray image. We trained the model for 400 epochs.
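For a longer run, the schedule can be set on the command line. The sketch below reuses the command from the question; the `--niter` / `--niter_decay` flag names are an assumption based on the pix2pixHD-style option parser this repo builds on, so check `options/train_options.py` for the exact names:

```shell
# Sketch only: train for 200 epochs at the initial learning rate,
# then 200 more while the rate decays (flag names assumed, see above).
python train.py --gpu_ids 0 --dataroot ./datasets/males --name males_model \
    --batchSize 2 --niter 200 --niter_decay 200 --verbose
```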

Regardless, feel free to use the pre-trained models if you need to.

glorioustory commented 2 years ago

@royorel, thanks for your reply. I used the pre-trained model with an input image of a Chinese face, but sometimes the output looks like a Western face, like this: input image: 222 one frame of the output video: 333

I suspect the problem is the dataset, so I have prepared a new dataset of Chinese face images and will continue training for more epochs. Thank you again for your answer.