google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://ai.google.dev/edge/mediapipe
Apache License 2.0

FaceMesh - Avatar puppeteering using blendshapes #1678

Closed estiez-tho closed 3 years ago

estiez-tho commented 3 years ago

Hi, I've recently been experimenting with the FaceMesh and face geometry modules. I'm trying to implement a blendshape-based model in order to control virtual avatars (like the Animoji on the iPhone X, for instance). I've come across this Google AI blog article presenting the MediaPipe Iris detection module, in which such an avatar is presented.

I have also found this paper (written by Google engineers in June 2020), which describes the model used in the FaceMesh module and mentions that a blendshape model is used to control said avatar (page 3, in the puppeteering section).

I was wondering if this blendshape model will be released any time soon, and whether there are any resources for understanding the model used. Also, which blendshapes are used for this model?

Thanks in advance.

kostyaby commented 3 years ago

Hey @estiez-tho!

As you correctly observed, our face blendshape prediction technology used for the GIFs is not open-sourced in MediaPipe yet. I'll defer the question of whether it'll be OSSed, and the timeline, to @chuoling and @mgyong.

mgyong commented 3 years ago

@estiez-tho Sorry, there are currently no plans to open-source the blendshape tech.

Zju-George commented 3 years ago

@mgyong Is the output of the face_geometry pipeline only a rigid transformation of the canonical face mesh? Or does it contain nonlinear deformations such as mouth open/close or eye-blink motion?

kostyaby commented 3 years ago

Hey @Zju-George,

At this point, it's only a rigid transformation of the canonical face mesh. It is designed not to react to facial-expression changes (like opening/closing the mouth or blinking), only to head-pose changes.

Zju-George commented 3 years ago

@kostyaby I see. Thank you for your reply!

wingdi commented 3 years ago

@mgyong

Hi, these are my 3D face model's morph targets:
"targetNames" : [
                    "Face.M_F00_000_00_Fcl_ALL_Neutral",
                    "Face.M_F00_000_00_Fcl_ALL_Angry",
                    "Face.M_F00_000_00_Fcl_ALL_Fun",
                    "Face.M_F00_000_00_Fcl_ALL_Joy",
                    "Face.M_F00_000_00_Fcl_ALL_Sorrow",
                    "Face.M_F00_000_00_Fcl_ALL_Surprised",
                    "Face.M_F00_000_00_Fcl_BRW_Angry",
                    "Face.M_F00_000_00_Fcl_BRW_Fun",
                    "Face.M_F00_000_00_Fcl_BRW_Joy",
                    "Face.M_F00_000_00_Fcl_BRW_Sorrow",
                    "Face.M_F00_000_00_Fcl_BRW_Surprised",
                    "Face.M_F00_000_00_Fcl_EYE_Angry",
                    "Face.M_F00_000_00_Fcl_EYE_Close",
                    "Face.M_F00_000_00_Fcl_EYE_Close_R",
                    "Face.M_F00_000_00_Fcl_EYE_Close_L",
                    "Face.M_F00_000_00_Fcl_Eye_Fun",
                    "Face.M_F00_000_00_Fcl_EYE_Joy",
                    "Face.M_F00_000_00_Fcl_EYE_Joy_R",
                    "Face.M_F00_000_00_Fcl_EYE_Joy_L",
                    "Face.M_F00_000_00_Fcl_EYE_Sorrow",
                    "Face.M_F00_000_00_Fcl_EYE_Surprised",
                    "Face.M_F00_000_00_Fcl_EYE_Spread",
                    "Face.M_F00_000_00_Fcl_EYE_Iris_Hide",
                    "Face.M_F00_000_00_Fcl_EYE_Highlight_Hide",
                    "Face.M_F00_000_00_Fcl_EYE_Extra",
                    "Face.M_F00_000_00_Fcl_MTH_Up",
                    "Face.M_F00_000_00_Fcl_MTH_Down",
                    "Face.M_F00_000_00_Fcl_MTH_Angry",
                    "Face.M_F00_000_00_Fcl_MTH_Neutral",
                    "Face.M_F00_000_00_Fcl_MTH_Fun",
                    "Face.M_F00_000_00_Fcl_MTH_Joy",
                    "Face.M_F00_000_00_Fcl_MTH_Sorrow",
                    "Face.M_F00_000_00_Fcl_MTH_Surprised",
                    "Face.M_F00_000_00_Fcl_MTH_SkinFung",
                    "Face.M_F00_000_00_Fcl_MTH_SkinFung_R",
                    "Face.M_F00_000_00_Fcl_MTH_SkinFung_L",
                    "Face.M_F00_000_00_Fcl_MTH_A",
                    "Face.M_F00_000_00_Fcl_MTH_I",
                    "Face.M_F00_000_00_Fcl_MTH_U",
                    "Face.M_F00_000_00_Fcl_MTH_E",
                    "Face.M_F00_000_00_Fcl_MTH_O",
                    "Face.M_F00_000_00_Fcl_HA_Hide",
                    "Face.M_F00_000_00_Fcl_HA_Fung1",
                    "Face.M_F00_000_00_Fcl_HA_Fung1_Low",
                    "Face.M_F00_000_00_Fcl_HA_Fung1_Up",
                    "Face.M_F00_000_00_Fcl_HA_Fung2",
                    "Face.M_F00_000_00_Fcl_HA_Fung2_Low",
                    "Face.M_F00_000_00_Fcl_HA_Fung2_Up",
                    "Face.M_F00_000_00_Fcl_HA_Fung3",
                    "Face.M_F00_000_00_Fcl_HA_Fung3_Up",
                    "Face.M_F00_000_00_Fcl_HA_Fung3_Low",
                    "Face.M_F00_000_00_Fcl_HA_Short",
                    "Face.M_F00_000_00_Fcl_HA_Short_Up",
                    "Face.M_F00_000_00_Fcl_HA_Short_Low",
                    "EyeExtra_01.M_F00_000_00_EyeExtra_On"
                ]

Can you give me a general idea of how to set the morph-target weight values using facial landmarks? I know the basic method is to compute the difference between certain landmark positions, but too many landmarks change at once. Is there an algorithm for calculating this?
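One common approach is to drive each morph target from a small set of hand-picked landmark distances, normalized for face scale, rather than from all landmarks at once. A minimal sketch assuming MediaPipe's 468-point mesh; the landmark indices and the closed/open calibration range here are assumptions you would tune for your own tracker and avatar:

```python
import numpy as np

# Illustrative only: these indices and calibration ranges are assumptions,
# not official MediaPipe constants.
UPPER_LIP, LOWER_LIP = 13, 14              # inner-lip landmarks (assumed)
LEFT_EYE_OUTER, RIGHT_EYE_OUTER = 33, 263  # used only for scale normalization

def mouth_open_weight(landmarks: np.ndarray,
                      closed: float = 0.02, open_: float = 0.25) -> float:
    """Map one landmark distance to a morph weight in [0, 1].

    landmarks: (468, 3) array of face-mesh points.
    The lip gap is divided by the inter-eye distance so the weight is
    invariant to face size and distance from the camera.
    """
    scale = np.linalg.norm(landmarks[LEFT_EYE_OUTER] - landmarks[RIGHT_EYE_OUTER])
    gap = np.linalg.norm(landmarks[UPPER_LIP] - landmarks[LOWER_LIP]) / scale
    # Linearly remap the calibrated closed..open range onto 0..1 and clamp.
    return float(np.clip((gap - closed) / (open_ - closed), 0.0, 1.0))

# Synthetic example: eyes one unit apart, lip gap halfway through the range.
lm = np.zeros((468, 3))
lm[RIGHT_EYE_OUTER] = [1.0, 0.0, 0.0]
lm[UPPER_LIP] = [0.5, 0.100, 0.0]
lm[LOWER_LIP] = [0.5, 0.235, 0.0]
print(mouth_open_weight(lm))  # ≈ 0.5
```

For many targets at once, trackers typically regress all weights jointly (e.g. a least-squares fit of blendshape deltas against the observed landmark offsets), but per-target distance rules like this are a workable starting point for targets such as eye close or jaw open.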

tu-nv commented 2 years ago

I am also having a similar problem, and this one looks promising (I haven't tried it yet though) https://github.com/yeemachine/kalidokit

opchronatron commented 2 years ago

For anyone looking for a plug-and-play blendshape SDK, you can get it here: https://joinhallway.com/. It uses the ARKit 52 standard.

GeorgeS2019 commented 2 years ago

@tu-nv @wingdi Please support this feature request for ARKit 52 blendshapes

brunodeangelis commented 2 years ago

I am also having a similar problem, and this one looks promising (I haven't tried it yet though) https://github.com/yeemachine/kalidokit

I've implemented that solution and it outputs much less data than something like Hallway. I also found it not too reliable, but I could be wrong about that.

Since I haven't been given access to the Hallway SDK yet, I went with mocap4face and it seems to be the best so far.

emphaticaditya commented 2 years ago

mocap4face is shutting down its SDK @brunodeangelis

xuyixun21 commented 1 year ago

I am also having a similar problem, and this one looks promising (I haven't tried it yet though) https://github.com/yeemachine/kalidokit

I've implemented that solution and it outputs much less data than something like Hallway. I also found it not too reliable, but I could be wrong about that.

Since I haven't been given access to the Hallway SDK yet, I went with mocap4face and it seems to be the best so far.

Could you help me and share the mocap4face code, since mocap4face is shutting down now?

brunodeangelis commented 1 year ago

Could you help me and share the mocap4face code, since mocap4face is shutting down now?

It's been a few months, and I don't remember why but I didn't use mocap4face in the end. I used Hallway's desktop app which allows for OSC data streaming. That was the solution to my intended use case.

baronha commented 1 year ago

Is there any other solution for 52-blendshape support? I'm having a problem with this.

metamultiverse commented 1 year ago

Did anyone get an update on this issue? I'm interested in the same solution. Hope they make it open source soon.

huhai463127310 commented 1 year ago

Did anyone get an update on this issue? I'm interested in the same solution. Hope they make it open source soon.

see https://github.com/keijiro/FaceMeshBarracuda/issues/24#issue-1618129677

AlexisTM commented 1 year ago

NOTE: the new MediaPipe Tasks vision API supports blendshapes natively.
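For reference, a minimal sketch of reading those scores with the Tasks FaceLandmarker (assumes `pip install mediapipe` and a locally downloaded `face_landmarker.task` model bundle; the helper at the bottom only assumes each entry exposes `category_name` and `score` fields):

```python
from collections import namedtuple

def blendshapes_to_dict(categories) -> dict:
    """Collapse a list of scored blendshape categories into {name: score}."""
    return {c.category_name: c.score for c in categories}

def detect_blendshapes(image_path: str) -> dict:
    """Run the MediaPipe Tasks FaceLandmarker with blendshape output enabled.

    Requires the mediapipe package and the face_landmarker.task bundle
    (the path below is a placeholder).
    """
    import mediapipe as mp
    from mediapipe.tasks import python
    from mediapipe.tasks.python import vision

    options = vision.FaceLandmarkerOptions(
        base_options=python.BaseOptions(model_asset_path="face_landmarker.task"),
        output_face_blendshapes=True)
    landmarker = vision.FaceLandmarker.create_from_options(options)
    result = landmarker.detect(mp.Image.create_from_file(image_path))
    # result.face_blendshapes[0] holds the scored categories for the first
    # face, with ARKit-style names such as "jawOpen" and "eyeBlinkLeft".
    return blendshapes_to_dict(result.face_blendshapes[0])

# Offline demonstration of the mapping helper with stand-in data:
Category = namedtuple("Category", ["category_name", "score"])
demo = [Category("jawOpen", 0.8), Category("eyeBlinkLeft", 0.1)]
print(blendshapes_to_dict(demo))  # {'jawOpen': 0.8, 'eyeBlinkLeft': 0.1}
```

The resulting dict can be fed directly into a morph-target rig keyed by the same ARKit-style names.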

GeorgeS2019 commented 7 months ago

will continue to track this issue here