jixinya / EVP

Code for paper 'Audio-Driven Emotional Video Portraits'.

Are there any missing steps between step1 and step2 of testing? #1

Closed quqixun closed 3 years ago

quqixun commented 3 years ago

How to apply the output (results/target.npy) of step1 to step2?

jixinya commented 3 years ago

To apply the output of step1, we need to regress the 3DMM parameters of both the predicted landmarks and the background video. Then replace the pose parameters of the predicted landmarks with those of the background video, and project the rotated landmarks to get the input for vid2vid (lm2map.py). However, due to copyright reasons, we can't provide the 3DMM fitting algorithm.
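The pose-swap-and-project step could be sketched roughly as follows. This is a minimal illustration, not the authors' code: the function names, the Euler-angle pose convention, and the weak-perspective (orthographic plus scale) projection are all assumptions, since the actual 3DMM fitting algorithm is not released.

```python
import numpy as np


def euler_to_rotation(angles):
    """Build a rotation matrix from (pitch, yaw, roll) Euler angles in radians.

    The angle order/convention here is an assumption for illustration.
    """
    x, y, z = angles
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(x), -np.sin(x)],
                   [0, np.sin(x),  np.cos(x)]])
    Ry = np.array([[ np.cos(y), 0, np.sin(y)],
                   [0, 1, 0],
                   [-np.sin(y), 0, np.cos(y)]])
    Rz = np.array([[np.cos(z), -np.sin(z), 0],
                   [np.sin(z),  np.cos(z), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx


def reproject_with_background_pose(landmarks_3d, bg_angles, bg_translation, scale=1.0):
    """Re-pose predicted 3D landmarks with the background video's pose,
    then project to 2D (hypothetical sketch).

    landmarks_3d   : (68, 3) pose-free 3D landmarks from step1's prediction
    bg_angles      : (3,) pose angles regressed from the background frame
    bg_translation : (2,) 2D translation regressed from the background frame
    scale          : weak-perspective scale from the background frame
    """
    R = euler_to_rotation(bg_angles)
    rotated = scale * (landmarks_3d @ R.T)       # apply background rotation
    projected = rotated[:, :2] + bg_translation  # drop depth, add 2D translation
    return projected
```

The projected 2D landmarks would then be rasterized into landmark maps (as `lm2map.py` does) to serve as input for vid2vid.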

quqixun commented 3 years ago

Got it. Thanks.