gafniguy / 4D-Facial-Avatars

Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction

Rigid Transformations #25

Closed HyunsooCha closed 2 years ago

HyunsooCha commented 2 years ago

Hi,

I am trying to run 'real_to_nerf.py' for my own research. I am implementing face-tracking code using DECA; however, DECA expresses its camera parameters as (tx, ty, scale), while NerFACE appears to use (tx, ty, tz) for its rigid transformations. So I would like to know how to convert the DECA camera parameters into the rigid transformations used by NerFACE. The goal of my question is to produce the three text files: intrinsics.txt, expression.txt, and rigid.txt. I appreciate your kindness; your research has been very helpful to me in understanding this area of study.
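
Concretely, what I have in mind is something like the sketch below. This is my own guess at the conversion, not anything from the repo: a weak-perspective camera can be approximated by a perspective rigid transform by choosing the depth that reproduces the scale. Sign and axis conventions will probably still need flipping, since NeRF-style cameras look down the negative z-axis:

```python
import numpy as np

def deca_cam_to_rigid(s, tx, ty, R, focal_px, img_size):
    """Approximate DECA's weak-perspective camera cam=(s, tx, ty) with a
    perspective rigid transform [R | t].

    DECA projects a mesh point X as  x' = s * (X_xy + [tx, ty])  in
    normalized [-1, 1] image coordinates, while a pinhole camera with
    normalized focal f_n and the head at depth tz gives approximately
    x' = f_n * (X_xy + [tx, ty]) / tz.  Matching the two yields
    tz = f_n / s, with the xy-translation unchanged.
    """
    f_n = focal_px / (img_size / 2.0)  # focal length in normalized units
    t = np.array([tx, ty, f_n / s])    # depth chosen to reproduce scale s
    T = np.eye(4)
    T[:3, :3] = R                      # global head rotation from DECA's pose
    T[:3, 3] = t
    return T
```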

Chopin68 commented 2 years ago

How is your progress? Were you able to reproduce the results from the paper? I'm stuck at this step.

HyunsooCha commented 2 years ago

I used DECA, which can be found on GitHub, as a face tracker. However, DECA's outputs - the pose, expression, and shape tensors - do not fit NerFACE directly. I also tried Expression Net, another 3DMM; unfortunately, that network does not fit NerFACE perfectly either. I think only Face2Face can produce the three text files.

HyunsooCha commented 2 years ago

And, as you may already know, the pose and expression tensors are quite sensitive, even at a scale of 0.01. So I suspect only Face2Face can produce output exact enough for NerFACE. I hope the author provides the code.

gafniguy commented 2 years ago

@StephenCha It should in principle be able to work with any face tracker (you just have to adjust the dimensionality of the expression vector). The authors of IM Avatars from ETH managed to run it with their own tracker (I'm not sure which, but I assume it is based on FLAME).

@yc4ny You are essentially asking how to implement face 3DMM tracking. This is too big of a subject to explain here. The details of our exact face tracker can be found in Face2Face [Thies et al]. We use this as an off-the-shelf preprocessing method.
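
To illustrate the dimensionality point with a toy example (this is not our actual module, just a sketch of the idea): the expression vector is simply concatenated to the encoded sample before the MLP, so its width is the only thing that has to match your tracker.

```python
import torch
import torch.nn as nn

POS_ENC_DIM = 63  # e.g. 3D position with 10 frequency bands plus identity
EXP_DIM = 50      # set to whatever your tracker emits (FLAME, Face2Face, ...)

# First layer of a NeRF-style MLP conditioned on the expression vector.
layer0 = nn.Linear(POS_ENC_DIM + EXP_DIM, 256)

x = torch.randn(1024, POS_ENC_DIM)                 # encoded sample positions
expr = torch.randn(EXP_DIM).expand(1024, EXP_DIM)  # one expression per frame, broadcast to all samples
h = torch.relu(layer0(torch.cat([x, expr], dim=-1)))
```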

HyunsooCha commented 2 years ago

I appreciate your reply. Happy day.

sunshineatnoon commented 2 years ago

Hi, I'm also trying to use my own tracker, but the learned model simply ignores my expression input. Is there any special requirement or pre-processing that has to be applied to the expression coefficients? I saw that in this line the expression coefficients are divided by three, and I'm not sure why.

gafniguy commented 2 years ago

The division by 3 is fairly meaningless (an attempt to "normalize" the expressions). In the evaluation script there is a line that freezes the expression; change it to None.
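
In code terms, the two spots being discussed amount to something like this (variable names are approximate; check the actual data-loading and evaluation scripts for the real ones):

```python
import numpy as np

expressions = np.loadtxt("expression.txt")  # one row of coefficients per frame
expressions = expressions / 3.0             # the ad-hoc "normalization"; harmless
                                            # as long as train and eval agree
ablate = None                               # None = the expression input is NOT frozen
```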

sunshineatnoon commented 2 years ago

I have already set ablate=None.

I think I found the problem: the images in the train folder seem to be out of order, but I ran my tracker on person_1_train.mp4, so the expression parameters do not correspond to the frames. Do you remember the order of the images in the train folder? Thanks

sunshineatnoon commented 2 years ago

Never mind, I found the order in index.npy. Thanks for your time.
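
For anyone hitting the same problem, the fix is just to reorder the tracker output with that file (assuming, as in my case, that index.npy stores each training image's original frame number):

```python
import numpy as np

idx = np.load("index.npy")                  # original video frame number per training image
expressions = np.loadtxt("expression.txt")  # tracker output, in video-frame order
expressions_aligned = expressions[idx]      # now matches the image order in train/
```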

Jianf-Wang commented 2 years ago

@StephenCha Hi, sorry to bother you. Have you found a way to produce the transformation matrix in the JSON file, the focal length, etc.? I want to apply this method to my own dataset, but I have no idea how to obtain them. The tool I am using is https://github.com/philgras/video-head-tracker, but it seems to use a different coordinate system.
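
For reference, the format I am trying to produce looks roughly like the sketch below. The key names follow the generic NeRF transforms.json convention, and I have not verified them against this repo's sample data. (Also note that an OpenCV-style tracker camera - x right, y down, z forward - needs its y and z axes flipped to match the OpenGL convention NeRF uses.)

```python
import json
import numpy as np

W, focal = 512, 1100.0        # image width and focal length in pixels (made up)
poses = [np.eye(4)]           # per-frame 4x4 camera-to-world matrices from the tracker
expressions = [np.zeros(76)]  # per-frame expression coefficients

frames = [
    {
        "file_path": f"./train/{i:04d}",
        "transform_matrix": c2w.tolist(),
        "expression": e.tolist(),
    }
    for i, (c2w, e) in enumerate(zip(poses, expressions))
]

meta = {"camera_angle_x": float(2 * np.arctan(0.5 * W / focal)), "frames": frames}
with open("transforms_train.json", "w") as f:
    json.dump(meta, f, indent=2)
```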

HulkMaker commented 1 year ago

@Jianf-Wang Hey bro, have you successfully bridged the tracking preprocessing requirement with this repo (https://github.com/philgras/video-head-tracker)?