iPsych opened this issue 2 years ago
The current code only works really well when you're looking straight into the camera; it's still missing a calibration feature to fit different faces. The only thing you can currently tune to your face are the values in the blendshape_config file. To do that, find the line in the blendshape_calculator that uses the desired blendshape in the _remap_blendshape function, like
mouth_smile_left = 1 - \
    self._remap_blendshape(FaceBlendShape.MouthSmileLeft, smile_left)
Then print the input (smile_left), look at the min and max values of that variable, and put them into the config file, e.g. FaceBlendShape.MouthSmileLeft: (-0.25, 0.0).
That's the way I found those values while programming it.
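If it helps, here is a rough sketch of that workflow as a standalone helper. The RangeTracker class and the wiring shown in the comments are only illustrative and not part of MeFaMo: feed it the same raw value that goes into _remap_blendshape every frame, then print the observed range and copy it into blendshape_config.

```python
class RangeTracker:
    """Tracks the min/max of a raw blendshape input over a capture session."""

    def __init__(self, name: str):
        self.name = name
        self.min_val = float("inf")
        self.max_val = float("-inf")

    def update(self, value: float) -> None:
        # Feed this the same raw value that is passed into
        # _remap_blendshape (e.g. smile_left), once per frame.
        self.min_val = min(self.min_val, value)
        self.max_val = max(self.max_val, value)

    def report(self) -> None:
        # The printed tuple is what goes into the blendshape_config
        # entry, e.g. FaceBlendShape.MouthSmileLeft: (-0.25, 0.0)
        print(f"{self.name}: ({self.min_val:.2f}, {self.max_val:.2f})")


# Hypothetical wiring inside the calculator loop:
#
#   smile_tracker = RangeTracker("FaceBlendShape.MouthSmileLeft")
#   ...
#   smile_tracker.update(smile_left)   # every frame, before remapping
#   mouth_smile_left = 1 - \
#       self._remap_blendshape(FaceBlendShape.MouthSmileLeft, smile_left)
#   ...
#   smile_tracker.report()             # after the capture session
```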
Hi @JimWest, great work and thank you. Do you have any plans for when automatic calibration for different types of faces will be implemented?
Cheers!
Hi, thanks. Can't tell you yet; I haven't found much time to work on this in my free time since the release. But it's still on my TODO list to improve this in the future.
It's amazing work!
I found that the LiveLink face data with a MetaHuman-generated character doesn't sync or calibrate properly. Below is the expression captured from a video of a smiling person; MediaPipe tracked it quite accurately.
The MetaHuman's response to the MeFaMo-transferred data.
Is there any parameter or step I should check or improve?
In your demo video, your smile is quite well-synced. https://www.reddit.com/r/unrealengine/comments/r8wbe3/my_livelink_facetracking_without_an_apple_device/
Does freshly exported MetaHuman data need the Blueprint modification described in the link you mentioned? https://docs.unrealengine.com/4.27/en-US/AnimatingObjects/SkeletalMeshAnimation/FacialRecordingiPhone/
Another strange example: just an image as input, and the result.