radekd91 / emoca

Official repository accompanying the CVPR 2022 paper "EMOCA: Emotion Driven Monocular Face Capture And Animation". EMOCA takes a single image of a face as input and produces a 3D reconstruction. EMOCA sets the new standard for reconstructing highly emotional images in the wild.
https://emoca.is.tue.mpg.de/

Hi, how to convert emoca expression latent code to AppleARKit 52 blendshapes? #9


lucasjinreal commented 2 years ago

Hi, how to convert emoca expression latent code to AppleARKit 52 blendshapes?

radekd91 commented 2 years ago

Hi, I have never used AppleARKit, so I don't know exactly what their blendshapes are, but if they are a standard 3DMM, I'm afraid there's no easy way to do it. Their mesh will have a different topology. You would have to predict the FLAME mesh with EMOCA and then fit your AppleARKit model to it.

lucasjinreal commented 2 years ago

@radekd91 Thanks for replying. The main reason I'm asking is that, in industry, many 3D models use the Apple standard for blendshapes, so if EMOCA can predict a 3DMM from a single RGB image, it could be used to drive those 3D models directly.

Currently, do you think there is a possible way to do it if I already have a model based on the Apple 52 blendshapes?

radekd91 commented 2 years ago

It is possible. If you have both blendshape models available, you can of course fit one to a mesh that is the output of the other. The fitting itself is a non-trivial process, but it is certainly possible.
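To illustrate the kind of fitting described above: once the ARKit rig's neutral mesh and its 52 delta blendshapes have somehow been brought into vertex correspondence with the FLAME output (that registration step is the non-trivial part and is not shown here), solving for the blendshape weights reduces to a box-constrained linear least-squares problem. This is only a minimal sketch under that correspondence assumption, with synthetic placeholder data rather than real FLAME or ARKit meshes:

```python
import numpy as np
from scipy.optimize import lsq_linear

def fit_blendshape_weights(target, neutral, deltas):
    """Solve min_w || neutral + sum_i w_i * deltas_i - target ||^2, 0 <= w <= 1.

    target:  (V, 3) reconstructed mesh (e.g. EMOCA/FLAME output),
             assumed already in vertex correspondence with the rig
    neutral: (V, 3) blendshape rig's neutral mesh
    deltas:  (52, V, 3) per-blendshape vertex offsets from the neutral
    """
    A = deltas.reshape(deltas.shape[0], -1).T   # (3V, 52) design matrix
    b = (target - neutral).ravel()              # (3V,) residual to explain
    res = lsq_linear(A, b, bounds=(0.0, 1.0))   # box-constrained least squares
    return res.x

# Synthetic sanity check: recover known weights on a random "mesh".
rng = np.random.default_rng(0)
V = 200                                         # placeholder vertex count
neutral = rng.standard_normal((V, 3))
deltas = rng.standard_normal((52, V, 3))
w_true = rng.uniform(0.0, 1.0, 52)
target = neutral + np.tensordot(w_true, deltas, axes=1)

w = fit_blendshape_weights(target, neutral, deltas)
print(np.allclose(w, w_true, atol=1e-5))
```

A real pipeline would replace the random arrays with registered meshes and would likely add regularization (e.g. a sparsity or temporal-smoothness term on `w`) to keep the recovered expressions stable across video frames.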

Daksitha commented 2 years ago

Hi @jinfagang, did you find a way to achieve this by converting FLAME (3DMM) to AppleARKit 52? Thanks

semchan commented 2 years ago

Hi, did you find a way to achieve this? Thanks a lot.

lucasjinreal commented 2 years ago

no