ButzYung / SystemAnimatorOnline

XR Animator, AI-based Full Body Motion Capture and Extended Reality (XR) solution, powered by System Animator Online
https://sao.animetheme.com/XR_Animator.html

Get VMC protocol data from mediapipe results #44

Open · xieleo5 opened this issue 9 months ago

xieleo5 commented 9 months ago

I've tried to read some of the code in this repo. MMD_SA.js seems to be responsible for sending the VMC data, and the data is read from the VRM model. So I guess the whole pipeline of this app is:

  1. Get results from MediaPipe (478 face landmarks, 21 x 2 hand landmarks, and 33 pose landmarks); a minimal sketch of this step is included right after this list.
  2. Translate the landmarks into control parameters. (I guess this is done in facemesh_lib.js and mocap_lib_module.js.)
  3. Apply the results to the model. (I couldn't find the code for this.)
  4. Read the model's state and send the data out over the VMC protocol.
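
For reference, this is a minimal sketch of what step 1 can look like with MediaPipe's legacy Holistic JS solution; the `webcam` video element and the CDN path are placeholders, and XR Animator's actual capture code may use a different MediaPipe API:

```js
// Sketch only, not XR Animator's actual code: reading the landmark sets
// from MediaPipe's legacy Holistic solution (@mediapipe/holistic).
import { Holistic } from "@mediapipe/holistic";

const video = document.getElementById("webcam"); // placeholder <video> element

const holistic = new Holistic({
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/holistic/${file}`,
});
holistic.setOptions({
  modelComplexity: 1,
  refineFaceLandmarks: true, // 478 face landmarks instead of 468
});

holistic.onResults((results) => {
  // results.faceLandmarks      -> 478 points (with refineFaceLandmarks)
  // results.poseLandmarks      -> 33 points
  // results.leftHandLandmarks  -> 21 points (undefined when no hand is detected)
  // results.rightHandLandmarks -> 21 points
  // Each point is { x, y, z } in normalized image coordinates
  // (pose points also carry a visibility score).
});

// Feed one video frame per animation tick.
async function tick() {
  await holistic.send({ image: video });
  requestAnimationFrame(tick);
}
tick();
```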

Correct me if any step is wrong. I'm trying to figure out how the app controls the 3D model using the landmarks from MediaPipe; a rough illustration of the kind of mapping I mean is sketched below. Also, is it possible to generate the VMC protocol data directly, so that we can skip moving the model entirely?
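
To make that concrete, here is a hedged illustration of the kind of landmark-to-bone mapping meant in steps 2 and 3. It assumes a three.js-style skeleton and is not code from this repo; the function and the bone lookup below are hypothetical:

```js
// Hedged illustration only, not code from this repo: turning two pose
// landmarks into a bone rotation, assuming a three.js skeleton.
import * as THREE from "three";

// MediaPipe Pose indices: 11 = left shoulder, 13 = left elbow.
function leftUpperArmRotation(poseLandmarks, restDirection = new THREE.Vector3(0, -1, 0)) {
  const shoulder = poseLandmarks[11];
  const elbow = poseLandmarks[13];

  // Direction the upper arm currently points. MediaPipe's y axis points down
  // in image space, so flip it for a y-up 3D scene.
  const current = new THREE.Vector3(
    elbow.x - shoulder.x,
    -(elbow.y - shoulder.y),
    elbow.z - shoulder.z
  ).normalize();

  // Rotation that takes the bone's rest-pose direction to the observed direction.
  return new THREE.Quaternion().setFromUnitVectors(restDirection, current);
}

// Hypothetical usage; `leftUpperArmBone` would come from the loaded VRM skeleton:
// leftUpperArmBone.quaternion.copy(leftUpperArmRotation(results.poseLandmarks));
```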

ButzYung commented 9 months ago

VMC data is sent at a very late stage of the pipeline, after all the 3D calculations have been finished and just before the 3D avatar and scene are about to be rendered to the screen. It simply sends out the bone positions and rotations, regardless of whether MediaPipe is used or not. You can choose to hide the 3D avatar and scene under the VMC protocol option to save some CPU/GPU usage.
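
For anyone looking at the wire format: VMC protocol is OSC over UDP, and a bone frame is a series of `/VMC/Ext/Bone/Pos` messages carrying the bone name, position, and rotation quaternion. Below is a minimal Node.js sketch, assuming the `osc` npm package and the default marionette port 39539; it only shows the message layout, not how MMD_SA.js actually sends it, and a plain browser page cannot open a raw UDP socket, so something like this would have to run outside the page itself.

```js
// Sketch of the VMC wire format only, not XR Animator's implementation.
// VMC protocol = OSC over UDP; the default marionette (receiver) port is 39539.
const osc = require("osc");

const udpPort = new osc.UDPPort({
  localAddress: "0.0.0.0",
  localPort: 0,               // any free local port for sending
  remoteAddress: "127.0.0.1", // the VMC receiver (e.g. another avatar app)
  remotePort: 39539,
});
udpPort.open();

// One bone per message: name, position (x, y, z), rotation quaternion (x, y, z, w).
// Bone names follow Unity's HumanBodyBones naming ("Hips", "LeftUpperArm", ...).
function sendBone(name, pos, rot) {
  udpPort.send({
    address: "/VMC/Ext/Bone/Pos",
    args: [
      { type: "s", value: name },
      { type: "f", value: pos.x }, { type: "f", value: pos.y }, { type: "f", value: pos.z },
      { type: "f", value: rot.x }, { type: "f", value: rot.y }, { type: "f", value: rot.z }, { type: "f", value: rot.w },
    ],
  });
}

udpPort.on("ready", () => {
  // Example frame: Hips one meter up with an identity rotation.
  sendBone("Hips", { x: 0, y: 1, z: 0 }, { x: 0, y: 0, z: 0, w: 1 });
});
```

The explicit `f` type tags matter here, since OSC floats are 32-bit and VMC receivers expect float arguments rather than integers.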