zhengyuf / STED-gaze

Code for paper 'Self-learning Transformations for Improving Gaze and Head Redirection'
GNU General Public License v3.0
89 stars · 23 forks

Generating images with random gaze directions #10

Closed ShiweiJin closed 2 years ago

ShiweiJin commented 3 years ago

Hello Yufeng,

Thank you for your excellent work and the code.

I used the published pre-trained models to generate some figures with manually set gaze directions based on the MPIIFaceGaze dataset. The synthesized figures look great. However, when I used my own collected face images as input, the gaze redirection results were not good. In some cases, the subject's skin color or even apparent gender changed.

What I did was replace test_visualize['image_a'] and test_visualize['image_b'] with my own images, with the full-face images rescaled to 128×128. However, I did not apply the normalization step to my own figures.
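For reference, here is a minimal preprocessing sketch for feeding a custom image into the model. It assumes the network expects a 128×128 RGB tensor with pixel values mapped to [-1, 1], a common GAN convention; the exact resizing and normalization used by this repo's data loader may differ, and `preprocess_face` is a hypothetical helper, not part of the codebase. Note that this only covers pixel-value normalization, not the geometric face normalization (warping to a canonical camera pose) that gaze datasets such as MPIIFaceGaze typically undergo.

```python
import cv2
import numpy as np
import torch

def preprocess_face(path, size=128):
    # Hypothetical helper: load an image, convert BGR -> RGB,
    # resize to the network's expected resolution, and map pixel
    # values to [-1, 1]. Check the repo's data loader for the
    # exact normalization it uses; this is only an assumption.
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA)
    img = img.astype(np.float32) / 255.0 * 2.0 - 1.0
    # HWC -> CHW and add a batch dimension
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)

# e.g. test_visualize['image_a'] = preprocess_face('my_face.png')
```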

Could you give me some suggestions, given that the synthesized images differ quite a lot from the source images? Thank you again for your help.

Sincerely, Shiwei

zhengyuf commented 3 years ago

Hi Shiwei,

Thank you for your question. The domain gap between synthetic images and real images is indeed an issue. If you would like the method to work on synthetic images, I would suggest re-training the network on synthetic images as well.

Yufeng