SEUvictor closed this issue 2 years ago.
Is it my mistake? I operate like this: 1. Run fitting.py to get src_headpose.txt/src_landmark.txt/drv_headpose.txt/drv_landmark.txt. 2. Run reenact.py with the txt files above to get the final reenacted result.
I think you mean that the expression transfer seems weak. This is partly because of the use of the DECA 3DMM fitting method; if our own 3DMM fitting method were used, the performance would be better. Additionally, our current method cannot reproduce eyebrow movement due to the limitations of 3DMM fitting. It is also worth noting that we have already achieved SOTA performance on high-resolution images in real time on the one-shot track, and none of the existing methods can reproduce the wrinkles caused by expression changes from only one input image.
Please allow me to confirm again. Are the "src_headpose.txt/src_landmark.txt/drv_headpose.txt/drv_landmark.txt" files for the source image and the driving image obtained through "fitting.py"? And if the driving input is a video, should the txt files for each frame also be obtained through "fitting.py"?
yes! you are right!
I use the picture above as the source image and the other picture above as the driving image. The character features are relatively similar, but the result is very poor, like the following figure. If the source image remains unchanged and the driving image is changed to the following figure, the result becomes even worse, as shown in the following figure.