How to apply the output (results/target.npy) of step1 to step2?

To apply the output of step1, we need to regress the 3DMM parameters of both the predicted landmarks and the background video. We then replace the pose parameters of the predicted landmarks with those of the background video and project the rotated landmarks to obtain the input for vid2vid (lm2map.py). However, due to copyright reasons, we cannot provide the 3DMM fitting algorithm.
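Not an official implementation, but a minimal sketch of that swap in Python. `fit_3dmm()` is only a placeholder for the unreleased fitting code; the background-landmark path, the parameter layout, and the weak-perspective projection are assumptions as well:

```python
import numpy as np

def fit_3dmm(landmarks_2d):
    """Regress 3DMM parameters from 2D landmarks.

    Placeholder for the fitting code that is not released.
    Assumed to return (landmarks_3d, pose) with pose = (rotation, translation, scale).
    """
    raise NotImplementedError("plug in your own 3DMM fitting")

def project(landmarks_3d, pose):
    """Apply the pose (rotate, scale, translate) and drop depth to get 2D points."""
    rotation, translation, scale = pose
    points = scale * landmarks_3d @ rotation.T + translation
    return points[:, :2]

pred_lms = np.load("results/target.npy")     # predicted landmarks from step1, (frames, 68, 2)
bg_lms = np.load("results/background.npy")   # landmarks of the background video (assumed path)

reposed = []
for pred, bg in zip(pred_lms, bg_lms):
    pred_3d, _ = fit_3dmm(pred)   # keep the predicted shape/expression
    _, bg_pose = fit_3dmm(bg)     # take the head pose from the background frame
    reposed.append(project(pred_3d, bg_pose))

# Re-posed landmarks, ready to be rendered into label maps by lm2map.py for vid2vid.
np.save("results/reposed.npy", np.stack(reposed))
```

The key point is that only the pose comes from the background video; the predicted landmarks keep their own shape and expression.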
Got it. Thanks.