gafniguy / 4D-Facial-Avatars

Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction

How to get expression statistics? #32


soom1017 commented 2 years ago

Thanks for your previous support with making a continuous video.

The real_to_nerf.py code says I need "expressions.txt" and "rigid.txt". Also, in the json file of the person_1 dataset, there are "expressions" values that are already prepared.

How can I get these values from my own video or image sequence? I searched for the Face2Face model code, and there's nothing but demo code using a pix2pix model.
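For context, a minimal sketch of how such per-frame files are often laid out: one whitespace-separated expression vector per line in expressions.txt, and one flattened 4x4 model-to-world pose per line in rigid.txt. This is an assumption; check real_to_nerf.py's parsing for the exact layout before relying on it.

```python
import numpy as np

# Assumed layout: one 76-D expression vector per frame, one line per frame.
expressions = np.loadtxt("expressions.txt")               # (num_frames, 76)

# Assumed layout: one flattened 4x4 rigid head pose per frame.
rigid_poses = np.loadtxt("rigid.txt").reshape(-1, 4, 4)   # (num_frames, 4, 4)

assert expressions.shape[0] == rigid_poses.shape[0], "one pose per frame"
```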

gafniguy commented 2 years ago

You need a face tracker; you can try some open source ones, e.g. [](url)

yangqing-yq commented 1 year ago

> You need a face tracker; you can try some open source ones, e.g. [](url)

@gafniguy As for the vht repo https://github.com/philgras/video-head-tracker: it outputs an expression vector of a totally different dimension (100-D). How can I align this with the nerface requirement (a 76-D expression vector)? Can I just change video-head-tracker's output dimension to match your input requirement?

yangqing-yq commented 1 year ago

@soom1017 hey, bro! I am also stuck at the step of generating these "expressions.txt" and "rigid.txt" files. Have you finally figured it out?

soom1017 commented 1 year ago

> @soom1017 hey, bro! I am also stuck at the step of generating these "expressions.txt" and "rigid.txt" files. Have you finally figured it out?

Sorry about that. My team decided not to use NeRF-like models, so I haven't made any further progress on this.

gafniguy commented 1 year ago

@yangqing-yq Yes, you can just change the 76 to the dimension of the FLAME expression vector. With the rigid pose you have to be a bit more careful: FLAME has a neck parameter as well, so make sure you take that into account when you save the R|T of the head.
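A minimal sketch of that advice (not code from either repo): folding FLAME's neck rotation into the root pose so the saved R|T describes the head itself. It assumes the tracker gives axis-angle rotations and that you know the FLAME rest-pose neck joint location; the function and variable names are hypothetical, and the exact pivot convention depends on your tracker.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def head_pose_from_flame(global_rotvec, global_trans, neck_rotvec, neck_joint):
    """Compose FLAME root and neck transforms into a single 4x4 head pose.

    global_rotvec, neck_rotvec: (3,) axis-angle rotations from the tracker
    global_trans:               (3,) root translation
    neck_joint:                 (3,) neck joint location in the FLAME rest pose
    """
    R_global = R.from_rotvec(global_rotvec).as_matrix()
    R_neck = R.from_rotvec(neck_rotvec).as_matrix()

    # Head rotation in world coordinates: root rotation followed by the
    # neck rotation in the kinematic chain.
    R_head = R_global @ R_neck

    # The neck rotates about its joint rather than the origin, so the
    # effective translation picks up an extra term from that pivot:
    # x' = R_neck @ (x - j) + j  =>  offset = j - R_neck @ j.
    t_head = global_trans + R_global @ (neck_joint - R_neck @ neck_joint)

    T = np.eye(4)
    T[:3, :3] = R_head
    T[:3, 3] = t_head
    return T
```

For the expressions, you would then save the tracker's 100-D vectors one per frame and change nerface's hard-coded 76 to 100 wherever the expression dimension appears.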