Closed ShiweiJin closed 2 years ago
Hi Shiwei,
Thank you for your question. The domain gap between synthetic images and real images is indeed an issue. If you would like the method to work on synthetic images, I would suggest re-training the network on synthetic images as well.
Yufeng
Hello Yufeng,
Thank you for your excellent work and the code.
I used the published pre-trained models to generate some images with manually set gaze directions based on the MPIIFaceGaze dataset. The synthesized images look great. However, when I use my own collected face images as input, the gaze redirection result is not good. In some cases, the person's skin color or even gender is changed.
What I did was replace `test_visualize['image_a']` and `test_visualize['image_b']` with my own images, with each full-face image rescaled to 128*128. However, I did not apply the normalization step to my own images. Could you give me some suggestions, given that the synthesized images differ so substantially from the source images? Thank you again for your help.
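As an aside for anyone hitting the same problem: skipping normalization is a likely contributor. Beyond the geometric "data normalization" (warping the face to a canonical camera) that gaze-redirection pipelines typically require, the pixel intensities usually also need to be scaled to the range the network was trained on. The sketch below is not the repository's actual pipeline; `preprocess_face` is a hypothetical helper showing only the resize-and-rescale step (nearest-neighbor resize, intensities mapped to [-1, 1], which many GAN-based models assume).

```python
import numpy as np

def preprocess_face(img, size=128):
    """Hypothetical helper: resize an HxWx3 uint8 image to size x size
    (nearest-neighbor) and scale pixel values from [0, 255] to [-1, 1].
    This does NOT perform the geometric camera normalization that the
    gaze pipeline may additionally require."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    resized = img[rows[:, None], cols[None, :]]
    return resized.astype(np.float32) / 127.5 - 1.0

# Example with a synthetic all-white 256x256 face crop:
dummy = np.full((256, 256, 3), 255, dtype=np.uint8)
out = preprocess_face(dummy)
assert out.shape == (128, 128, 3)
```

If the pre-trained model still produces identity changes after matching the training-time preprocessing, the remaining gap is likely the appearance-domain mismatch the reply above describes.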
Sincerely, Shiwei