https://github.com/KwaiVGI/LivePortrait has trained a better face vid2vid model from the ground up. However, the official repo provides only video-driven generation, which is exactly where SadTalker comes into play with its audio-driven pipeline. Is there any plan to integrate the advantages of both SadTalker and LivePortrait?
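
For context, here is a minimal sketch of what such an integration could look like: SadTalker's audio-to-video output serving as the driving clip for LivePortrait's vid2vid retargeting. All module and function names below (`SadTalker`, `LivePortrait`, `generate`, `animate`) are hypothetical placeholders for illustration, not actual APIs from either repo:

```python
# Hypothetical integration sketch -- neither repo exposes these exact APIs.
# Idea: SadTalker turns audio into a driving talking-head video; LivePortrait
# then retargets that motion onto an arbitrary source portrait.

from sadtalker import SadTalker          # hypothetical import
from live_portrait import LivePortrait   # hypothetical import


def audio_driven_portrait(source_image: str, audio: str, out_path: str) -> None:
    # Step 1 (SadTalker, audio -> motion): synthesize a driving video
    # whose lip motion and head pose follow the input audio.
    driving_video = SadTalker().generate(audio=audio)  # hypothetical call

    # Step 2 (LivePortrait, video -> video): use that clip as the driving
    # signal to animate the source portrait with LivePortrait's stronger
    # vid2vid model.
    LivePortrait().animate(                            # hypothetical call
        source=source_image,
        driving=driving_video,
        output=out_path,
    )


if __name__ == "__main__":
    audio_driven_portrait("portrait.png", "speech.wav", "result.mp4")
```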