mmmmmm44 / VTuber-Python-Unity

An implementation of VTuber (both 3D and Live2D) using Python and Unity. Provides face movement tracking, eye-blink detection, iris detection and tracking, and mouth movement tracking using CPU only.
MIT License

3D custom model question #15

Open positive666 opened 2 years ago

positive666 commented 2 years ago

If I design a 3D model myself, do I need to follow the topology of the face landmarks of Mediapipe?

mmmmmm44 commented 2 years ago

No.

Mediapipe detects your face from the camera and then calculates the landmarks. Then we use the pose_estimator.py script and functions from facial_features.py to calculate the head movement, eye aspect ratios, and mouth aspect ratios to control the avatar.
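
For illustration, here is a minimal sketch of how an eye aspect ratio can be computed from landmark points. The function name and the landmark ordering are assumptions for the example, not the exact code in facial_features.py:

```python
import numpy as np

# Minimal eye aspect ratio (EAR) sketch for blink detection.
# Assumed ordering of the six points: left corner, two upper-lid points,
# right corner, two lower-lid points (indices in facial_features.py may differ).
def eye_aspect_ratio(pts):
    pts = np.asarray(pts, dtype=np.float64)  # shape (6, 2)
    vertical_1 = np.linalg.norm(pts[1] - pts[5])
    vertical_2 = np.linalg.norm(pts[2] - pts[4])
    horizontal = np.linalg.norm(pts[0] - pts[3])
    # EAR stays roughly constant while the eye is open and drops towards 0
    # when it closes, so a simple threshold can flag a blink.
    return (vertical_1 + vertical_2) / (2.0 * horizontal)
```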

After you import the model into Unity, find the corresponding objects in your character and try to write a program in Unity to translate the ratios into your character's movement, like how the UnityChan model does.
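
As a rough sketch of the hand-off between the two sides, one common approach is to stream the computed ratios from Python to Unity over a local socket. The port number and the JSON message format below are assumptions for illustration, not this repository's actual protocol:

```python
import json
import socket

HOST, PORT = "127.0.0.1", 5066  # assumed local endpoint for the Unity listener

def send_frame(sock, roll, pitch, yaw, ear_left, ear_right, mar):
    # Hypothetical message layout; match whatever your Unity script parses.
    payload = {"roll": roll, "pitch": pitch, "yaw": yaw,
               "earLeft": ear_left, "earRight": ear_right, "mar": mar}
    # Newline-delimited JSON so the Unity receiver can split messages easily.
    sock.sendall((json.dumps(payload) + "\n").encode("utf-8"))

with socket.create_connection((HOST, PORT)) as sock:
    send_frame(sock, 0.0, 0.0, 0.0, 0.3, 0.3, 0.1)
```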

Edit: I have no experience in modelling, but in my understanding, modelling and Mediapipe are two different things.

positive666 commented 2 years ago

Okay, thanks for your reply. If I'm going to do full-body gestures, would I also need an initial set of 3D coordinates similar to model.txt? Do you know how to get this?

mmmmmm44 commented 2 years ago

Sorry for the really late reply, as I have been busy with a summer internship.

I have no experience in full-body gesture detection, so I don't know whether a file like model.txt is required.
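
For context, in the face-tracking case the reference 3D points (the role model.txt plays) are typically combined with the detected 2D landmarks through a perspective-n-point solve. A minimal sketch, with assumed names and rough camera parameters:

```python
import numpy as np
import cv2

# Illustrative only: recover head pose from N matched point pairs.
# model_points_3d: (N, 3) reference points, e.g. loaded from model.txt.
# image_points_2d: (N, 2) landmarks detected in the current frame.
def estimate_pose(model_points_3d, image_points_2d, frame_w, frame_h):
    # Rough pinhole intrinsics: focal length ~ frame width, centred principal point.
    camera_matrix = np.array([[frame_w, 0, frame_w / 2],
                              [0, frame_w, frame_h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(model_points_3d.astype(np.float64),
                                  image_points_2d.astype(np.float64),
                                  camera_matrix, dist_coeffs)
    return rvec, tvec  # rotation and translation of the head
```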

Although Mediapipe has an API for this, namely Holistic, the pose estimation may become unsatisfactory if part of your body is covered or the camera views you from the side. Unfortunately, better results require more sophisticated solutions, such as HRNet, AlphaPose or OpenPose, which require decent to high-end graphics cards to run, and more coding knowledge.
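
If you want to try Holistic anyway, a quick sketch of the (legacy) solutions API looks like this; the confidence thresholds are just starting values to tune:

```python
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Mediapipe expects RGB frames; OpenCV captures BGR.
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 body landmarks normalised to the image; index 0 is the nose.
            nose = results.pose_landmarks.landmark[0]
            print(f"nose: x={nose.x:.2f} y={nose.y:.2f}")
cap.release()
```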

I think you should look for more professional equipment like the Valve Index or HTC Vive (VR hardware) if you aim for a smooth experience.

Thank you. mmmmmm44