NetEase-GameAI / Face2FaceRHO

The Official PyTorch Implementation for Face2Face^ρ (ECCV2022)
BSD 3-Clause "New" or "Revised" License

poor result #9

Closed SEUvictor closed 2 years ago

SEUvictor commented 2 years ago

[image: 30] I used this picture above as the source image, and this picture [image: driving] as the driving image. The facial features are fairly similar, but the result is very poor, as in the following figure [image: result]. If the source image stays the same and the driving image is changed to the following figure [image: driving2], the result becomes even worse, as shown in the following figure [image: result2].

SEUvictor commented 2 years ago

Did I make a mistake? Here is my procedure: 1. Run fitting.py to get src_headpose.txt / src_landmark.txt / drv_headpose.txt / drv_landmark.txt. 2. Run reenact.py with the txt files above to get the final reenacted result.
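For reference, the two-step procedure above can be sketched as a pair of command lines. The flag names (`--src_img`, `--output_dir`, etc.) are assumptions for illustration only; the actual CLI is defined by the argument parsers in fitting.py and reenact.py.

```python
def build_pipeline(src_img, drv_img, out_dir="out"):
    """Build the two commands of the pipeline: 3DMM fitting, then reenactment.

    Flag names are hypothetical placeholders, not the repo's documented CLI.
    """
    # Step 1: fitting.py produces head-pose and landmark txt files for both
    # the source and the driving image (file names taken from the thread).
    fit = (f"python fitting.py --src_img {src_img} "
           f"--drv_img {drv_img} --output_dir {out_dir}")
    txts = [f"{out_dir}/{name}.txt"
            for name in ("src_headpose", "src_landmark",
                         "drv_headpose", "drv_landmark")]
    # Step 2: reenact.py consumes the four txt files to render the result.
    reenact = ("python reenact.py --src_img {} "
               "--src_headpose {} --src_landmark {} "
               "--drv_headpose {} --drv_landmark {}").format(src_img, *txts)
    return fit, reenact

fit_cmd, reenact_cmd = build_pipeline("source.jpg", "driving.jpg")
print(fit_cmd)
print(reenact_cmd)
```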

NetEase-GameAI commented 2 years ago

I think you are referring to the expression transfer seeming weak. This is partly because the DECA 3DMM fitting method was used; with our own 3DMM fitting method, the performance would be better. Additionally, our current method cannot reproduce eyebrow movement due to the limitations of 3DMM fitting. It is also worth noting that we have achieved state-of-the-art performance on high-resolution images in real time on the one-shot track, and no existing method can reproduce the wrinkles caused by expression changes from only one input image.

SEUvictor commented 2 years ago

Please allow me to confirm again. Are the "src_headpose.txt / src_landmark.txt / drv_headpose.txt / drv_landmark.txt" files for the source image and the driving image obtained through "fitting.py"? And if the driving input is a video, should the txt files for each frame also be obtained through "fitting.py"?

NetEase-GameAI commented 2 years ago

Yes, you are right!
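Since the confirmation above implies running fitting.py on every frame of a driving video, the per-frame bookkeeping might look like the following sketch. Again, the flag names and per-frame output layout are assumptions, not the repo's documented interface; in practice the source-side files only need to be fitted once and can be reused across frames.

```python
def per_frame_commands(frame_paths, src_img="source.jpg", out_dir="out"):
    """For each driving-video frame, build a (fitting, reenact) command pair.

    Flag names and directory layout are illustrative assumptions.
    The src_* files are identical for every frame and could be computed once.
    """
    cmds = []
    for i, frame in enumerate(frame_paths):
        prefix = f"{out_dir}/frame_{i:05d}"
        # Fit the 3DMM for this driving frame (and the fixed source image).
        fit = (f"python fitting.py --src_img {src_img} "
               f"--drv_img {frame} --output_dir {prefix}")
        # Reenact the source with this frame's head pose and landmarks.
        reenact = (f"python reenact.py --src_img {src_img} "
                   f"--src_headpose {prefix}/src_headpose.txt "
                   f"--src_landmark {prefix}/src_landmark.txt "
                   f"--drv_headpose {prefix}/drv_headpose.txt "
                   f"--drv_landmark {prefix}/drv_landmark.txt")
        cmds.append((fit, reenact))
    return cmds

for fit_cmd, reenact_cmd in per_frame_commands(["f0.png", "f1.png"]):
    print(fit_cmd)
    print(reenact_cmd)
```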