maum-ai / faceshifter

Unofficial PyTorch Implementation for FaceShifter (https://arxiv.org/abs/1912.13457)
BSD 3-Clause "New" or "Revised" License

Training with only CelebHQ dataset #13

Closed · Qiulin-W closed this 5 months ago

Qiulin-W commented 3 years ago

Thanks for the great work!

When I try to train the AEI-Net with 30k images from the CelebA-HQ dataset using 6 P40 32G GPUs, I get the training curves below: [six screenshots of training loss curves, 2020-11-23]

All other settings are left at their defaults, and the generated face swaps also look weird: [three example output images]

Should I continue training, or do you have any suggestions? Thanks in advance!

usingcolor commented 3 years ago

Hi! Thanks for your compliments. Training with only CelebA-HQ is quite a risky choice: the dataset bias can affect the total loss. In your loss graph, the reconstruction loss is very high. You could try increasing the coefficient of the reconstruction loss if you want to train with only CelebA-HQ.

Be careful about overfitting.
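
For reference, "increase the coefficient of the reconstruction loss" would look something like the sketch below. The weights follow the FaceShifter paper's defaults (λ_att = λ_rec = 10, λ_id = 5); the function name and signature are hypothetical, not this repo's actual API:

```python
# Hypothetical sketch of the AEI-Net loss weighting, not this repo's code.
def aei_total_loss(l_adv, l_att, l_id, l_rec,
                   w_att=10.0, w_id=5.0, w_rec=10.0):
    """Weighted sum of the four AEI-Net losses (paper defaults).

    On a small, biased dataset such as CelebA-HQ alone, raising w_rec
    (e.g. to 20-40) pushes the model harder toward faithful
    reconstruction, at the cost of a higher overfitting risk.
    """
    return l_adv + w_att * l_att + w_id * l_id + w_rec * l_rec
```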

Qiulin-W commented 3 years ago

> Hi! Thanks for your compliments. Training with only CelebA-HQ is quite a risky choice: the dataset bias can affect the total loss. In your loss graph, the reconstruction loss is very high. You could try increasing the coefficient of the reconstruction loss if you want to train with only CelebA-HQ.
>
> Be careful about overfitting.

Thank you so much for your reply!

By the way, what is the expected reconstruction loss value for your well-trained model? If possible, could you share your loss graph as a reference? Did you set grad_clip to zero?

usingcolor commented 3 years ago
  1. The reconstruction loss should be around 1e-3 to 1e-4 for a well-trained model. You can see a reconstruction example in my Colab example, which was recently added.
  2. No, I can't share the loss graph.
  3. Yes, I didn't clip the gradients. Setting the grad_clip option to zero means no clipping; see the sketch after this list.
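
For reference, this is how a grad_clip option is commonly wired into a PyTorch training step; the option name and step function below are a hedged sketch, not necessarily this repo's exact code:

```python
import torch

def optimizer_step(loss, model, optimizer, grad_clip=0.0):
    optimizer.zero_grad()
    loss.backward()
    if grad_clip > 0:  # grad_clip == 0 means no clipping, as noted above
        torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip)
    optimizer.step()
```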
coranholmes commented 3 years ago

Could you please tell me how long it takes to finish the training?

Qiulin-W commented 3 years ago

> Could you please tell me how long it takes to finish the training?

For the CelebA-HQ dataset only, it takes 2-3 hours per epoch.

hanikh commented 3 years ago

> Could you please tell me how long it takes to finish the training?
>
> For the CelebA-HQ dataset only, it takes 2-3 hours per epoch.

Can you benefit from multi-GPU training? Does it really increase the training speed?
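
In general, multi-GPU data parallelism increases throughput by splitting each batch across GPUs. A minimal sketch of the standard PyTorch approach, DistributedDataParallel with one process per GPU (names here are generic, not this repo's training script):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_ddp(model, local_rank):
    # One process per GPU; launch e.g. with torchrun.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)
    model = model.cuda(local_rank)
    # Gradients are all-reduced across processes, so each optimizer step
    # sees an effective batch of per_gpu_batch * world_size samples.
    return DDP(model, device_ids=[local_rank])
```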

rainq22 commented 6 months ago

It has been four years now. Could you please share the arcface.pth file? The link has expired. Many thanks!