Open johndpope opened 6 months ago
Yes, your idea is great, thanks for the advice, @johndpope! Finding more effective and stable ID prior knowledge, as well as ID decoupling methods, is the core of the personalized portrait generation task. Everyone is welcome to experiment, discuss, and open PRs.
The core module of VASA-1 is MegaPortraits, a physically meaningful motion and ID decoupling framework. Coincidentally, we are also paying attention to ID consistency in character video tasks, and have reproduced some MegaPortraits modules. We find that some of the face registration experience there can be borrowed for ID preservation in image generation.
These ideas will be verified in subsequent experiments, and any results and conclusions will be shared here if progress is made.
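For readers unfamiliar with the decoupling idea being discussed: the MegaPortraits-style setup splits a portrait into an identity (appearance) stream taken from a source image and a motion stream taken from a driving image, then recombines them in a generator. The sketch below is purely illustrative; all module names, layer sizes, and the simple concatenation-based fusion are assumptions for exposition, not code from MegaPortraits, VASA-1, or this repo.

```python
import torch
import torch.nn as nn

class AppearanceEncoder(nn.Module):
    """Extracts an identity/appearance feature map from the source image."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class MotionEncoder(nn.Module):
    """Compresses the driving image into a compact motion/pose code,
    discarding identity information (in the real method this is enforced
    by training losses, not by architecture alone)."""
    def __init__(self, dim=64, pose_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(dim, pose_dim),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Re-renders the source identity under the driver's motion."""
    def __init__(self, dim=64, pose_dim=6):
        super().__init__()
        self.fuse = nn.Conv2d(dim + pose_dim, dim, 3, padding=1)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, app_feat, motion):
        # Broadcast the motion code over the appearance feature map.
        b, _, h, w = app_feat.shape
        m = motion.view(b, -1, 1, 1).expand(b, motion.shape[1], h, w)
        return self.up(self.fuse(torch.cat([app_feat, m], dim=1)))

src = torch.randn(1, 3, 64, 64)  # source image: supplies identity
drv = torch.randn(1, 3, 64, 64)  # driving image: supplies motion
out = Generator()(AppearanceEncoder()(src), MotionEncoder()(drv))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```

The point of the split is that ID preservation reduces to keeping the appearance pathway stable while only the small motion code varies frame to frame; MegaPortraits additionally uses 3D warping fields rather than the naive concatenation shown here.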
I've been working on this: https://github.com/johndpope/MegaPortrait-hack
I believe one of the key MegaPortraits authors from Samsung AI Lab (now working at Facebook) will open-source EMOPortraits in July.
In the meantime, I'm close to recreating the entire paper; it's just blowing up on the training loop.
I'm also working on recreating another paper, VASA-1: https://github.com/johndpope/vasa-1-hack
To piggyback, I leveraged this codebase together with https://github.com/yerfor/Real3DPortrait/, which has a face3d helper.
I give that file as context to Claude and simply ask it to leverage the code to upgrade this codebase.
https://drive.google.com/drive/folders/1o4t5YIw7w4cMUN4bgU9nPf6IyWVG1bEk
I attempted to implement some updates on my branch here (but maybe you all have more time to look at this): https://github.com/johndpope/consistentid