Hangz-nju-cuhk / Talking-Face-Generation-DAVS

Code for Talking Face Generation by Adversarially Disentangled Audio-Visual Representation (AAAI 2019)
MIT License

Can I add new images into the demo_images folder for testing? #17

Closed: jianglingling007 closed this issue 5 years ago

jianglingling007 commented 5 years ago

Hi Hang Zhou, I just added some new images to demo_images for testing, and I find that the variation across the generated fake frames is not like that of the four demo images. Does this repo's code support testing on other images, or should I do some preprocessing on my own images first?

Hangz-nju-cuhk commented 5 years ago

Hi, the input image has to go through the alignment procedure. The pre-processing code that we used for alignment is provided in this repo.
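
For illustration only, here is a minimal sketch of what a three-point similarity alignment typically looks like. The `align_face` helper name, the template coordinates, and the 256x256 output size are assumptions made up for this example, not the values used in this repo; `face_align.py` in the repo defines the actual procedure. The layout of `points` is explained later in this thread.

```python
# Minimal sketch of three-point face alignment (NOT the repo's exact code).
# The template coordinates and the 256x256 crop size are made-up examples.
import cv2
import numpy as np

def align_face(image, points):
    # points: [left_eye_x, left_eye_y, right_eye_x, right_eye_y, nose_x, nose_y,
    #          left_mouth_x, left_mouth_y, right_mouth_x, right_mouth_y]
    three_points = np.zeros((3, 2), dtype=np.float32)
    three_points[0] = points[0:2]                    # left eye
    three_points[1] = points[2:4]                    # right eye
    three_points[2] = [(points[6] + points[8]) / 2,  # mouth centre
                       (points[7] + points[9]) / 2]

    # Hypothetical target locations of the three points in the aligned crop.
    template = np.float32([[87, 120], [169, 120], [128, 200]])

    # Similarity transform (rotation + uniform scale + translation) mapping
    # the detected points onto the template, then warp the whole image.
    M, _ = cv2.estimateAffinePartial2D(three_points, template)
    return cv2.warpAffine(image, M, (256, 256))
```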

jianglingling007 commented 5 years ago

Thanks for your reply.


JosephPai commented 4 years ago

> Hi, the input image has to go through the alignment procedure. The pre-processing code that we used for alignment is provided in this repo.

Hi, is the alignment done in face_align.py? Can you explain the meaning of points in that function? Also, where can I obtain these points, and what scale should they be in? Thanks!

Hangz-nju-cuhk commented 4 years ago

@JosephPai As you can see from the code:

```python
three_points = np.zeros((3, 2))
three_points[0] = np.array(points[:2])   # the location of the left eye
three_points[1] = np.array(points[2:4])  # the location of the right eye
three_points[2] = np.array([(points[6] + points[8]) / 2,
                            (points[7] + points[9]) / 2])  # the location of the center of the mouth
```

The points are the facial keypoints, ordered as: [left_eye_x, left_eye_y, right_eye_x, right_eye_y, nose_x, nose_y, left_mouth_x, left_mouth_y, right_mouth_x, right_mouth_y].

You can use any facial keypoint estimation code to get them, for example: https://github.com/1adrianb/face-alignment.
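
For concreteness, here is a hedged sketch of how one might obtain those ten values with the face-alignment library linked above. The landmark indices follow the standard 68-point convention; the image path and device are placeholders, and this is not the authors' own preprocessing script.

```python
# Sketch: derive the ten-value keypoint list from 68-point landmarks
# produced by https://github.com/1adrianb/face-alignment.
import face_alignment
import numpy as np
from skimage import io

# Note: the enum spelling differs across library versions
# (older releases use LandmarksType._2D, newer ones LandmarksType.TWO_D).
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType.TWO_D, device='cpu')

image = io.imread('my_face.jpg')         # placeholder input image
landmarks = fa.get_landmarks(image)[0]   # (68, 2) array for the first detected face

# Map the standard 68-point convention onto the five points listed above.
left_eye    = landmarks[36:42].mean(axis=0)  # centre of the left-eye contour
right_eye   = landmarks[42:48].mean(axis=0)  # centre of the right-eye contour
nose        = landmarks[30]                  # nose tip
left_mouth  = landmarks[48]                  # left mouth corner
right_mouth = landmarks[54]                  # right mouth corner

points = np.concatenate([left_eye, right_eye, nose, left_mouth, right_mouth])
# points == [left_eye_x, left_eye_y, right_eye_x, right_eye_y, nose_x, nose_y,
#            left_mouth_x, left_mouth_y, right_mouth_x, right_mouth_y]
```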