YuelangX / Gaussian-Head-Avatar

[CVPR 2024] Official repository for "Gaussian Head Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians"

how to do re-enactment with an mp4 video of myself after training? #30

Open jryebread opened 6 months ago

jryebread commented 6 months ago

Hi, I'm confused about how to test one of the existing datasets and get a front-facing re-enactment of one of the NeRSemble avatars using an MP4 input video of myself. Can someone guide me on how to do this?

I already trained and ran one of the examples on the mini dataset, but I don't understand how to use my own driving video for re-enactment.

The instructions say "the trained avatar can be reenacted by a sequence of expression coefficients". What does this mean? How can I input my own MP4 video for reenactment? Is there a script to convert an MP4 video into the input format the model needs?
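For context, "a sequence of expression coefficients" usually means one low-dimensional vector per video frame, saved so the renderer can step through them in order. A minimal sketch of what such a sequence looks like on disk (the key name, file layout, and the 64-dimensional size are illustrative assumptions, not this repo's actual schema):

```python
import os
import tempfile

import numpy as np

# Many 3DMM-style fitters use on the order of 64-100 expression
# coefficients per frame; 64 here is an assumption for illustration.
NUM_FRAMES = 5
EXP_DIM = 64

tmpdir = tempfile.mkdtemp()
for i in range(NUM_FRAMES):
    # One coefficient vector per video frame of the driving MP4.
    exp = np.zeros(EXP_DIM, dtype=np.float32)
    # The key name 'exp_coeff' is hypothetical; check the fitting
    # output for the real schema.
    np.savez(os.path.join(tmpdir, f"{i:04d}.npz"), exp_coeff=exp)

files = sorted(os.listdir(tmpdir))
print(len(files))  # one params file per driving frame
```

Reenactment then amounts to feeding these vectors, frame by frame, to the trained avatar in place of the training subject's own coefficients.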

jryebread commented 5 months ago

@YuelangX

YuelangX commented 5 months ago

Hi @jryebread. I provide instructions for extracting 3DMM expression coefficients from a monocular video here: https://github.com/YuelangX/Multiview-3DMM-Fitting. You can refer to it.

jryebread commented 5 months ago

@YuelangX Hi, thank you. I set up all the files for Multiview-3DMM-Fitting, but how do I get the params needed for param_files in reenactment.yml?

The reenactment script asserts that len(params) == len(images), but the Multiview preprocessor linked below only outputs images and cameras, so I am confused about how to generate params.npz.

https://github.com/YuelangX/Multiview-3DMM-Fitting/blob/main/preprocess/preprocess_monocular_video.py
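The failing assertion above is just a count check between frames and per-frame parameter files. A standalone preflight version of that check can catch the mismatch before launching reenactment (the directory layout and file extensions here are assumptions, not the repo's exact structure):

```python
import tempfile
from pathlib import Path


def check_reenactment_inputs(image_dir: str, param_dir: str) -> None:
    """Fail early if frame and param counts diverge.

    Mirrors the reenactment script's assert that
    len(params) == len(images).
    """
    images = sorted(Path(image_dir).glob("*.jpg"))
    params = sorted(Path(param_dir).glob("*.npz"))
    if len(images) != len(params):
        raise ValueError(
            f"{len(images)} images but {len(params)} param files; "
            "run the 3DMM fitting step so every frame has params"
        )


# Example with temporary dummy data: three frames, three param files.
img_dir = tempfile.mkdtemp()
prm_dir = tempfile.mkdtemp()
for i in range(3):
    (Path(img_dir) / f"{i:04d}.jpg").touch()
    (Path(prm_dir) / f"{i:04d}.npz").touch()

check_reenactment_inputs(img_dir, prm_dir)  # counts match, no error
```

If the preprocessor only produced images and cameras, this check fails, which matches the symptom described: the fitting step that writes the param files has not been run yet.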

jryebread commented 5 months ago

It is also needed for pose_code_path:

pose_code_path: 'mini_demo_dataset/031/params/0000/params.npz'

NikoBele1 commented 5 months ago

@jryebread you need to run the multiview/monocular fitting step (https://github.com/YuelangX/Multiview-3DMM-Fitting?tab=readme-ov-file#multiview-monocular-fitting) to generate the landmarks and params.
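Once that fitting step has run, the resulting params.npz can be inspected with NumPy to see which arrays the reenactment config will load. The sketch below writes and reads a stand-in file; the key names (exp_coeff, pose, scale) and sizes are assumptions for illustration, so check the actual fitting output for the real schema:

```python
import os
import tempfile

import numpy as np

# Build a stand-in params.npz with hypothetical keys and shapes.
path = os.path.join(tempfile.mkdtemp(), "params.npz")
np.savez(
    path,
    exp_coeff=np.zeros(64, dtype=np.float32),  # expression coefficients
    pose=np.zeros(6, dtype=np.float32),        # head pose (rotation + translation)
    scale=np.ones(1, dtype=np.float32),        # global scale
)

# Inspect whatever a fitting run actually produced the same way.
with np.load(path) as data:
    for key in sorted(data.files):
        print(key, data[key].shape, data[key].dtype)
```

Running this kind of inspection on a real params.npz from the mini demo dataset is a quick way to confirm your own fitting output matches the format the reenactment script expects.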