oawiles / X2Face

Pytorch code for ECCV 2018 paper
MIT License

Crop size for source and driving image #4

Closed Blade6570 closed 5 years ago

Blade6570 commented 5 years ago

Hi, thank you for releasing the pre-trained model. While testing on other videos, I found that the crop size really matters for the quality of the results. I tried cropping faces from the video, randomly varying the rectangle size, and chose the one that gave a reasonable result. Could you please mention the exact crop size you used after detecting faces with dlib? It would be a great help.

oawiles commented 5 years ago

Hi. We got the data from someone else (it is linked from our website: http://www.robots.ox.ac.uk/~vgg/research/unsup_learn_watch_faces/x2face.html). Unfortunately you'll have to see what they say.

From a quick browse, I believe they say this in their paper: "Since in both datasets, the specified face regions yield a tight face crop, we expand all crops by a factor of ×1.6 to incorporate additional context into the face region."
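For what it's worth, a ×1.6 expansion of a tight detector box around its centre can be sketched as below. The helper name, signature, and the clamping to image bounds are my own; the exact preprocessing used for the X2Face data may differ:

```python
def expand_crop(left, top, right, bottom, factor=1.6, img_w=None, img_h=None):
    """Expand a tight face box around its centre by `factor`.

    Takes box corners (e.g. from a dlib rectangle: rect.left(), rect.top(),
    rect.right(), rect.bottom()) and optionally clamps the result to the
    image size so the crop stays inside the frame.
    """
    # Centre of the original box.
    cx = (left + right) / 2.0
    cy = (top + bottom) / 2.0
    # Half-extents of the expanded box.
    half_w = (right - left) * factor / 2.0
    half_h = (bottom - top) * factor / 2.0
    l, r = cx - half_w, cx + half_w
    t, b = cy - half_h, cy + half_h
    # Clamp to image bounds if provided.
    if img_w is not None:
        l, r = max(0.0, l), min(float(img_w), r)
    if img_h is not None:
        t, b = max(0.0, t), min(float(img_h), b)
    return int(round(l)), int(round(t)), int(round(r)), int(round(b))
```

For example, a 100×100 box at (100, 100)-(200, 200) becomes a 160×160 box at (70, 70)-(230, 230). Whether the expanded crop is then resized to a fixed resolution (and to which one) is a separate choice not covered by the quoted sentence.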