StevenYounng opened 5 months ago
To our knowledge, GANimation's pre-trained model was trained at 128px (on cropped face images), the official SimSwap pre-trained model at 224px (by observation it also works on 256px inputs during inference), and StarGAN releases two pre-trained models (128px and 256px). So we conducted experiments at both 128px and 256px and did not retrain these pre-trained models. You can try to reproduce and retrain the models at the resolution you prefer, as well as any other generative models, by yourself. Good luck!
First of all, thanks for your answer! From the data in SepMark and the GANimation dataset you provided, it seems that the pre-trained GANimation model supports 256px. Does this mean you tested 256px images on the 128px pre-trained model?
Yes, you're right. The 128px pre-trained GANimation model does accept 256px inputs, but the visual results may not be good, as the examples you provided show. We only take some representative Deepfakes as black-box distortions in the noise layer, and training or retraining the generative models is somewhat beyond the scope of this work.
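If the artifacts come from feeding 256px crops to the 128px model, one workaround is to downscale inputs to the generator's training resolution before inference. This is a hypothetical sketch, not code from the SepMark or GANimation repositories; a pure-Python nearest-neighbour resize is used only to keep it self-contained (in practice you would use PIL or torchvision):

```python
# Hypothetical sketch (assumed names, not from SepMark/GANimation):
# downscale a 256px face crop to the 128px training resolution of the
# pre-trained generator before running inference on it.
TRAIN_RES = 128  # resolution the 128px pre-trained model was trained at

def nearest_resize(pixels, size=TRAIN_RES):
    """Downscale a square image (a list of rows of pixels) to size x size
    by nearest-neighbour sampling."""
    src = len(pixels)
    step = src / size
    return [
        [pixels[int(y * step)][int(x * step)] for x in range(size)]
        for y in range(size)
    ]

# A dummy 256x256 "image": each pixel stores its (row, col) coordinate.
img_256 = [[(y, x) for x in range(256)] for y in range(256)]
img_128 = nearest_resize(img_256)
print(len(img_128), len(img_128[0]))  # 128 128
```

With real tensors you would instead resize with `PIL.Image.resize` or `torchvision.transforms.Resize(128)` before the generator's forward pass, then upsample the output back to 256px if needed.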
ok, thanks for your answer! Best wishes!
Have you ever encountered this problem when using GANimation's pre-trained model? How can I solve it?