DigiHuman is a project that automatically generates whole-body and facial animation for 3D character models from camera input. It is my B.Sc. thesis in Computer Engineering at Amirkabir University of Technology (AUT).
DigiHuman automates animation generation on 3D virtual characters. It uses a pose-estimation model and a facial-landmark generator to create full-body and face animation on 3D virtual characters, and it is developed with MediaPipe and Unity3D. MediaPipe generates 3D landmarks for the whole body and face, and Unity3D renders the final animation after processing the landmarks generated by MediaPipe. The diagram below shows the overall architecture of the application.
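Raw per-frame landmark estimates are typically jittery, so some temporal smoothing between landmark generation and rendering helps the final animation look stable. The sketch below is purely illustrative and not DigiHuman's actual processing code; the function name and the `alpha` value are assumptions, showing one common technique (exponential moving-average smoothing) applied to 3D landmarks:

```python
# Illustrative sketch (not DigiHuman's code): exponential moving-average
# smoothing of 3D landmarks, as one might apply before rendering.

def smooth_landmarks(frames, alpha=0.5):
    """frames: list of frames, each frame a list of (x, y, z) tuples.
    alpha=1.0 keeps the raw values; smaller alpha smooths more."""
    smoothed = []
    prev = None
    for frame in frames:
        if prev is None:
            cur = [tuple(pt) for pt in frame]  # first frame passes through
        else:
            # blend each coordinate with the previous smoothed frame
            cur = [
                tuple(alpha * c + (1 - alpha) * p for c, p in zip(pt, prev_pt))
                for pt, prev_pt in zip(frame, prev)
            ]
        smoothed.append(cur)
        prev = cur
    return smoothed

frames = [[(0.0, 0.0, 0.0)], [(1.0, 1.0, 1.0)]]
print(smooth_landmarks(frames, alpha=0.5))
# second frame is pulled halfway toward the first: (0.5, 0.5, 0.5)
```

A lower `alpha` trades responsiveness for stability; tuning it per landmark group (face vs. body) is a common refinement.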
## Installation
Follow the instructions below to run the program.

### Backend server
1. Install MediaPipe and OpenCV:
```
pip install mediapipe
pip install opencv-python
```
2. Go to the `backend` directory and install the other requirements:
```
pip install -r requirements.txt
```
3. Place the pretrained model checkpoint files under `backend/checkpoints/coco_pretrained/`.
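A missing checkpoint directory usually surfaces only as a confusing error when the server first loads the model, so it can be worth checking up front. This small helper is my own suggestion, not part of the project:

```python
# Illustrative helper (not part of DigiHuman): verify that the checkpoint
# directory exists and is non-empty before launching the backend server.
from pathlib import Path

def checkpoints_ready(path="backend/checkpoints/coco_pretrained"):
    p = Path(path)
    return p.is_dir() and any(p.iterdir())

if not checkpoints_ready():
    print("Checkpoint files missing; download them before running server.py")
```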
### Unity3D
Install Unity3D and its requirements by following these steps (skip steps 1-3 if Unity3D is already installed):
1. Install Unity3D (LTS versions higher than 2020.3.25f1 are recommended).
2. In the Unity Player settings, allow HTTP connections.
3. Download the [FFmpegOutBinaries package](https://github.com/keijiro/FFmpegOutBinaries/releases) and import it (drag the `.unitypackage` files into your project).
## Usage
1. Run the backend server from the `backend` directory:
```
python server.py
```
2. Open the Unity project and load the scene at `Assets\Scenes\MainScene.unity`.
## Adding new characters
You can add your own characters to the project! Characters need a standard Humanoid rig to play the kinematic animations, and a facial rig (blendshapes) to render face animations. Follow these steps to add a character:
1. Drag the character into the `CharacterChooser/CharacterSlideshow/Parent` object in the Unity main scene, as in the image below.
2. Add the `BlendShapeController` and `QualityData` components to the character object in the scene (the one dragged inside `Parent` in the previous step).
3. Set the `BlendShapeController` values: assign the character's `SkinnedMeshRenderer` component to the `BlendShapeController` component.
4. Find the index of each blendshape in the `SkinnedMeshRenderer` and enter those indices in the `BlendShapes` field of the `BlendShapeController` (this tells the `BlendShapeController` which blendshape each value drives, so the animation appears on the character's face when these blendshape values are modified).
5. Select the `CharacterSlideshow` object at `CharacterChooser/CharacterSlideshow` in the scene hierarchy, then add the newly dragged character to its `nodes` property (all characters must be referenced inside `nodes`).
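Conceptually, the `BlendShapes` field maps each facial expression channel to a blendshape index on the character's `SkinnedMeshRenderer`, and the controller then writes a weight (0-100 in Unity) to that index each frame. A language-agnostic sketch of that idea in Python; the channel names and index mapping below are made up for illustration and are not DigiHuman's actual data:

```python
# Illustrative sketch of what a blendshape controller does conceptually:
# map expression channels (values in [0, 1]) to per-character blendshape
# indices, producing Unity-style weights in [0, 100].

def blendshape_weights(channels, index_map):
    """channels: {channel_name: value in [0, 1]} from the face model.
    index_map: {channel_name: blendshape index on the SkinnedMeshRenderer}.
    Returns {blendshape_index: weight in [0, 100]}."""
    weights = {}
    for name, value in channels.items():
        if name in index_map:
            clamped = min(max(value, 0.0), 1.0)  # guard against out-of-range values
            weights[index_map[name]] = clamped * 100.0
    return weights

channels = {"jawOpen": 0.3, "eyeBlinkLeft": 1.2}   # hypothetical model outputs
index_map = {"jawOpen": 0, "eyeBlinkLeft": 4}      # hypothetical per-character indices
print(blendshape_weights(channels, index_map))
# {0: 30.0, 4: 100.0}
```

This is why step 4 matters: the indices differ per character model, so each character's mapping must be entered in its own `BlendShapeController`.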
## License
GPL-3.0, non-commercial use only. If you distribute or communicate copies of the modified or unmodified Program, or any portion thereof, you must provide appropriate credit to Danial Kordmodanlou as the original author of the Program. This attribution should be included in any location where the Program is used or displayed.
## Citation
```
@inproceedings{park2019SPADE,
  title={Semantic Image Synthesis with Spatially-Adaptive Normalization},
  author={Park, Taesung and Liu, Ming-Yu and Wang, Ting-Chun and Zhu, Jun-Yan},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2019}
}
```
## Contact
Danial Kordmodanlou - kordmodanloo@gmail.com

Website: danial-kord.github.io

Project Link: github.com/Danial-Kord/DigiHuman

Telegram ID: @Danial_km