GeorgeS2019 closed this issue 1 year ago.
A relevant attempt by the VRM community to adopt the ARKit 52 blendshapes:
Hi @GeorgeS2019, the blendshapes come from Xeno (https://drive.google.com/open?id=1f030M8gbXgJN-5-JhcPkDnK5ehIwl3grNhRXSZBXvjw&resourcekey=0-q6Z33dZKrau_ngUh0gJDeA). The basic implementation is ready, but we were waiting for a stable MediaPipe version of the app before submitting. Pushing to M2 sounds good.
@sureshdagooglecom and @jiuqiant
Hi Suresh and Jiuqiang, could one of you clarify what you mean by M2?
MediaPipe has become an industry standard for developers. I believe many would not need to wait for the app you refer to but would use the basic implementation immediately as part of the MediaPipe SDK.
Please open a PR so the community can build it, or use the nightly build (after the PR is merged), and start testing immediately so we can give iterative feedback on the basic implementation.
@sureshdagooglecom Please support this feature request for https://github.com/google/mediapipe/issues/3421
@sureshdagooglecom
No access rights
+1
@kuaashish could you help address some of the questions here? Assuming that both @jiuqiant and @sureshdagooglecom are on a long holiday
+1 for this. It would really democratize the virtual character puppeteering market, allowing developers to build things we've not imagined yet
@brunodeangelis I am about to close this issue as our requests are being ignored.
I think the 52 blendshapes are not easily integrated into MediaPipe, but one could train a new model to regress these outputs from the 3D face pose as input.
Could you expand on this please? What type of model? Do you have an example?
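For readers asking what such a regressor could look like: below is a minimal Keras sketch. The architecture, the choice of MediaPipe's 468 3D face landmarks as input, and the training data are all hypothetical; you would need your own paired (landmarks, coefficients) dataset, e.g. recorded with an ARKit-capable iPhone.

import tensorflow as tf

# Tiny MLP that regresses 52 blendshape coefficients from flattened
# 3D face landmarks. The sigmoid keeps each coefficient in [0, 1].
model = tf.keras.Sequential([
    tf.keras.Input(shape=(468 * 3,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(52, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(landmarks, coefficients, epochs=...)  # with your own paired data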
Any update on this, guys? Is it in progress or dropped? Really looking forward to it.
I'm also very much waiting for this feature to be released... Please update this issue!
Hoping to get an update soon on support for this technology :-)
Regarding the earlier comment that the blendshapes come from Xeno: the Google Drive link (https://drive.google.com/open?id=1f030M8gbXgJN-5-JhcPkDnK5ehIwl3grNhRXSZBXvjw&resourcekey=0-q6Z33dZKrau_ngUh0gJDeA) is invalid.
+1
Any update so far?
This feature could be revolutionary. I'm really wondering why it's being ignored.
It seems like they are working on this: https://github.com/google/mediapipe/commit/ba10ae8410ce431f593c95986a230add675e5ccd (this was 3 days ago), but the models aren't available and there's no info on how to use them.
+1
For reference, these are Apple's descriptions of the ARKit blendshape coefficients:
eyeBlinkLeft. The coefficient describing closure of the eyelids over the left eye.
eyeLookDownLeft. The coefficient describing movement of the left eyelids consistent with a downward gaze.
eyeLookInLeft. The coefficient describing movement of the left eyelids consistent with a rightward gaze.
eyeLookOutLeft. The coefficient describing movement of the left eyelids consistent with a leftward gaze.
eyeLookUpLeft. The coefficient describing movement of the left eyelids consistent with an upward gaze.
eyeSquintLeft. The coefficient describing contraction of the face around the left eye.
eyeWideLeft. The coefficient describing a widening of the eyelids around the left eye.
eyeBlinkRight. The coefficient describing closure of the eyelids over the right eye.
eyeLookDownRight. The coefficient describing movement of the right eyelids consistent with a downward gaze.
eyeLookInRight. The coefficient describing movement of the right eyelids consistent with a leftward gaze.
eyeLookOutRight. The coefficient describing movement of the right eyelids consistent with a rightward gaze.
eyeLookUpRight. The coefficient describing movement of the right eyelids consistent with an upward gaze.
eyeSquintRight. The coefficient describing contraction of the face around the right eye.
eyeWideRight. The coefficient describing a widening of the eyelids around the right eye.
jawForward. The coefficient describing forward movement of the lower jaw.
jawLeft. The coefficient describing leftward movement of the lower jaw.
jawRight. The coefficient describing rightward movement of the lower jaw.
jawOpen. The coefficient describing an opening of the lower jaw.
mouthClose. The coefficient describing closure of the lips independent of jaw position.
mouthFunnel. The coefficient describing contraction of both lips into an open shape.
mouthPucker. The coefficient describing contraction and compression of both closed lips.
mouthLeft. The coefficient describing leftward movement of both lips together.
mouthRight. The coefficient describing rightward movement of both lips together.
mouthSmileLeft. The coefficient describing upward movement of the left corner of the mouth.
mouthSmileRight. The coefficient describing upward movement of the right corner of the mouth.
mouthFrownLeft. The coefficient describing downward movement of the left corner of the mouth.
mouthFrownRight. The coefficient describing downward movement of the right corner of the mouth.
mouthDimpleLeft. The coefficient describing backward movement of the left corner of the mouth.
mouthDimpleRight. The coefficient describing backward movement of the right corner of the mouth.
mouthStretchLeft. The coefficient describing leftward movement of the left corner of the mouth.
mouthStretchRight. The coefficient describing rightward movement of the right corner of the mouth.
mouthRollLower. The coefficient describing movement of the lower lip toward the inside of the mouth.
mouthRollUpper. The coefficient describing movement of the upper lip toward the inside of the mouth.
mouthShrugLower. The coefficient describing outward movement of the lower lip.
mouthShrugUpper. The coefficient describing outward movement of the upper lip.
mouthPressLeft. The coefficient describing upward compression of the lower lip on the left side.
mouthPressRight. The coefficient describing upward compression of the lower lip on the right side.
mouthLowerDownLeft. The coefficient describing downward movement of the lower lip on the left side.
mouthLowerDownRight. The coefficient describing downward movement of the lower lip on the right side.
mouthUpperUpLeft. The coefficient describing upward movement of the upper lip on the left side.
mouthUpperUpRight. The coefficient describing upward movement of the upper lip on the right side.
browDownLeft. The coefficient describing downward movement of the outer portion of the left eyebrow.
browDownRight. The coefficient describing downward movement of the outer portion of the right eyebrow.
browInnerUp. The coefficient describing upward movement of the inner portion of both eyebrows.
browOuterUpLeft. The coefficient describing upward movement of the outer portion of the left eyebrow.
browOuterUpRight. The coefficient describing upward movement of the outer portion of the right eyebrow.
cheekPuff. The coefficient describing outward movement of both cheeks.
cheekSquintLeft. The coefficient describing upward movement of the cheek around and below the left eye.
cheekSquintRight. The coefficient describing upward movement of the cheek around and below the right eye.
noseSneerLeft. The coefficient describing a raising of the left side of the nose around the nostril.
noseSneerRight. The coefficient describing a raising of the right side of the nose around the nostril.
tongueOut. The coefficient describing extension of the tongue.
// The 52 blendshape names, in the model's output order (cf. the indices in
// the classification dump below). Note that "_neutral" takes the place of
// ARKit's "tongueOut".
static constexpr std::array<absl::string_view, 52> kBlendshapeNames = {
"_neutral",
"browDownLeft",
"browDownRight",
"browInnerUp",
"browOuterUpLeft",
"browOuterUpRight",
"cheekPuff",
"cheekSquintLeft",
"cheekSquintRight",
"eyeBlinkLeft",
"eyeBlinkRight",
"eyeLookDownLeft",
"eyeLookDownRight",
"eyeLookInLeft",
"eyeLookInRight",
"eyeLookOutLeft",
"eyeLookOutRight",
"eyeLookUpLeft",
"eyeLookUpRight",
"eyeSquintLeft",
"eyeSquintRight",
"eyeWideLeft",
"eyeWideRight",
"jawForward",
"jawLeft",
"jawOpen",
"jawRight",
"mouthClose",
"mouthDimpleLeft",
"mouthDimpleRight",
"mouthFrownLeft",
"mouthFrownRight",
"mouthFunnel",
"mouthLeft",
"mouthLowerDownLeft",
"mouthLowerDownRight",
"mouthPressLeft",
"mouthPressRight",
"mouthPucker",
"mouthRight",
"mouthRollLower",
"mouthRollUpper",
"mouthShrugLower",
"mouthShrugUpper",
"mouthSmileLeft",
"mouthSmileRight",
"mouthStretchLeft",
"mouthStretchRight",
"mouthUpperUpLeft",
"mouthUpperUpRight",
"noseSneerLeft",
"noseSneerRight"};
All 52 blendshape coefficients:
classification {
index: 0
score: 3.0398707e-05
label: "_neutral"
}
classification {
index: 1
score: 0.9075446
label: "browDownLeft"
}
classification {
index: 2
score: 0.88084334
label: "browDownRight"
}
classification {
index: 3
score: 8.574165e-05
label: "browInnerUp"
}
classification {
index: 4
score: 0.00025488975
label: "browOuterUpLeft"
}
classification {
index: 5
score: 0.0003774864
label: "browOuterUpRight"
}
classification {
index: 6
score: 0.0013324458
label: "cheekPuff"
}
classification {
index: 7
score: 3.3356114e-06
label: "cheekSquintLeft"
}
classification {
index: 8
score: 3.486574e-06
label: "cheekSquintRight"
}
classification {
index: 9
score: 0.23493542
label: "eyeBlinkLeft"
}
classification {
index: 10
score: 0.25156906
label: "eyeBlinkRight"
}
classification {
index: 11
score: 0.31935102
label: "eyeLookDownLeft"
}
classification {
index: 12
score: 0.3578726
label: "eyeLookDownRight"
}
classification {
index: 13
score: 0.040764388
label: "eyeLookInLeft"
}
classification {
index: 14
score: 0.10176937
label: "eyeLookInRight"
}
classification {
index: 15
score: 0.13951905
label: "eyeLookOutLeft"
}
classification {
index: 16
score: 0.08599486
label: "eyeLookOutRight"
}
classification {
index: 17
score: 0.04357939
label: "eyeLookUpLeft"
}
classification {
index: 18
score: 0.039808873
label: "eyeLookUpRight"
}
classification {
index: 19
score: 0.71037096
label: "eyeSquintLeft"
}
classification {
index: 20
score: 0.5613919
label: "eyeSquintRight"
}
classification {
index: 21
score: 0.0035762936
label: "eyeWideLeft"
}
classification {
index: 22
score: 0.0046946923
label: "eyeWideRight"
}
classification {
index: 23
score: 0.0015334645
label: "jawForward"
}
classification {
index: 24
score: 0.0015781156
label: "jawLeft"
}
classification {
index: 25
score: 0.1825222
label: "jawOpen"
}
classification {
index: 26
score: 0.00022096233
label: "jawRight"
}
classification {
index: 27
score: 0.04180384
label: "mouthClose"
}
classification {
index: 28
score: 0.03450827
label: "mouthDimpleLeft"
}
classification {
index: 29
score: 0.02150588
label: "mouthDimpleRight"
}
classification {
index: 30
score: 0.0004071877
label: "mouthFrownLeft"
}
classification {
index: 31
score: 0.00041903765
label: "mouthFrownRight"
}
classification {
index: 32
score: 0.006162456
label: "mouthFunnel"
}
classification {
index: 33
score: 0.0007518647
label: "mouthLeft"
}
classification {
index: 34
score: 0.020578554
label: "mouthLowerDownLeft"
}
classification {
index: 35
score: 0.06688861
label: "mouthLowerDownRight"
}
classification {
index: 36
score: 0.21384144
label: "mouthPressLeft"
}
classification {
index: 37
score: 0.12757207
label: "mouthPressRight"
}
classification {
index: 38
score: 0.0037631108
label: "mouthPucker"
}
classification {
index: 39
score: 0.0017571376
label: "mouthRight"
}
classification {
index: 40
score: 0.017397188
label: "mouthRollLower"
}
classification {
index: 41
score: 0.073125295
label: "mouthRollUpper"
}
classification {
index: 42
score: 0.009962141
label: "mouthShrugLower"
}
classification {
index: 43
score: 0.006196939
label: "mouthShrugUpper"
}
classification {
index: 44
score: 0.9412662
label: "mouthSmileLeft"
}
classification {
index: 45
score: 0.9020016
label: "mouthSmileRight"
}
classification {
index: 46
score: 0.054742802
label: "mouthStretchLeft"
}
classification {
index: 47
score: 0.080990806
label: "mouthStretchRight"
}
classification {
index: 48
score: 0.7372297
label: "mouthUpperUpLeft"
}
classification {
index: 49
score: 0.69999576
label: "mouthUpperUpRight"
}
classification {
index: 50
score: 3.5180938e-07
label: "noseSneerLeft"
}
classification {
index: 51
score: 8.41376e-06
label: "noseSneerRight"
}
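To drive an avatar from this output, you mainly need a name-to-weight mapping. Here is a small, hypothetical Python helper; it assumes the classification entries expose .label and .score fields as shown in the dump above:

def blendshape_weights(classifications):
    # Convert MediaPipe's ClassificationList entries into a dict that can
    # drive ARKit-named morph targets directly.
    weights = {c.label: c.score for c in classifications}
    # "_neutral" has no ARKit counterpart (and MediaPipe's set omits ARKit's
    # "tongueOut"), so drop it before retargeting.
    weights.pop("_neutral", None)
    return weights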
To all users here: please encourage this project to support the latest 52 blendshapes. Do generously support the new feature request with emoji reactions and the project with stars.
The relevant commit was on 16 Feb.
We will need to wait for the next release.
I also saw the blendshape commit, but I don't know whether it can be used directly yet.
I believe the usage example is being developed; the official date is sometime in April 2023.
That's great, thanks to the developers
@GeorgeS2019 If you want to try the model and can't wait for it to be ported to Unity, you can download it from here: face_blendshapes.tflite. The easiest way to test it is in Python. You can also use it in Unity without converting to ONNX by using the TF Lite to Unity plugin.
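A quick way to sanity-check the standalone .tflite file in Python, as suggested above, is to inspect its tensors and run a dummy forward pass. The shapes are printed rather than assumed; in the full MediaPipe graph the model is fed a subset of face-landmark coordinates:

import numpy as np
import tensorflow as tf

# Load the standalone blendshape model and query its tensor layout.
interpreter = tf.lite.Interpreter(model_path="face_blendshapes.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
print("input:", inp["shape"], "output:", out["shape"])

# Dummy forward pass just to confirm the model runs end to end.
dummy = np.random.rand(*inp["shape"]).astype(np.float32)
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"]).flatten()  # 52 coefficients
print(scores)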
I need your support on this issue so that more users interested in testing the blendshapes in a game engine can iterate faster.
After close to 9 months of requests, and with many here supporting the idea:
I am closing this issue now!
https://www.phizmocap.dev/
Are you using MediaPipe for this project? I found it uses the archived mocap4face package from facemoji.
Consider supporting this issue
I have installed mediapipe==0.9.2.1 via pip to use this blendshape estimation. If anyone knows how to run it, I would appreciate it if you could tell me.
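Not an official example, but the Tasks-API route looks roughly like the sketch below. It assumes you have downloaded the face_landmarker.task model bundle separately, and that your installed mediapipe version already ships vision.FaceLandmarker (added in releases after the blendshape commit):

import mediapipe as mp
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

# Request blendshapes alongside the landmarks.
options = vision.FaceLandmarkerOptions(
    base_options=mp_python.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,
    num_faces=1,
)
with vision.FaceLandmarker.create_from_options(options) as landmarker:
    image = mp.Image.create_from_file("face.jpg")  # any test photo
    result = landmarker.detect(image)
    for category in result.face_blendshapes[0]:
        print(category.category_name, category.score)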
@GeorgeS2019 @fadiaburaid Thank you for your efforts to make this amazing feature possible! Do you have any Python/JavaScript example using this tflite model? It seems that the official guide is still in progress and the Python example can't run.
To everyone: this is PART 2, Text To Speech to Facial Blendshapes. Please support!
@GeorgeS2019,
Could you let us know if we can mark this issue as resolved and close it? We are also following up on another issue raised, #4428. Thank you!
@kuaashish Thank you for your support. I need it to present a global crisis.
@schmidt-sebastian @kuaashish Hi, researchers! Could you please give me some hints on the training details of this blendshape estimator? I have referred to your solution page but still couldn't get the point. How did you generate the ground-truth 3D blendshapes (52 coefficients) from the 3D mesh reconstruction? Is it optimization-based logic, where a rigged face's blendshape weights are optimized to deform the canonical mesh toward the captured 3D mesh?
Or does it follow the framework in this paper, except that body parts are removed and the skinning functions are specified by artists instead of being learnable?
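For concreteness, here is one optimization-based formulation of the question above, purely an illustration rather than a confirmed description of MediaPipe's training pipeline: a bounded least-squares fit of the 52 coefficients against a captured mesh. All arrays here are stand-in data.

import numpy as np
from scipy.optimize import lsq_linear

num_vertices = 468  # hypothetical mesh resolution
neutral = np.zeros(num_vertices * 3)           # flattened canonical mesh
deltas = np.random.rand(num_vertices * 3, 52)  # artist-made per-blendshape offsets
captured = np.random.rand(num_vertices * 3)    # flattened captured 3D mesh

# Fit w to minimize ||deltas @ w - (captured - neutral)||^2 with 0 <= w <= 1,
# i.e. find the blendshape weights that best deform the canonical mesh
# toward the captured one; these would serve as ground-truth coefficients.
fit = lsq_linear(deltas, captured - neutral, bounds=(0.0, 1.0))
weights = fit.x  # 52 fitted coefficients
print(weights.round(3))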
Please make sure that this is a feature request.
Describe the feature and the current behavior/state:
Is there a plan to output the facial blendshapes that are compatible with the ARKit 52 blendshapes?
Will this change the current api? How?
This is an addition to the existing API.
Who will benefit with this feature?
This provides facial mocap for avatars that use the ARKit 52 blendshapes.
Please specify the use cases for this feature:
Currently, users of industry-standard tools (e.g., Character Creator 3) limit themselves to facial mocap apps from the Apple App Store.
This feature request will democratize facial mocap apps on Android and make them available through the Google Play Store.
Related discussion
Related community effort to address this unmet need for Android phone users