Adaptation and Unification of AlphaPose and MotionBert.
Please use Python 3.9; it is the only version compatible with the Blender version used below (Blender 2.93 bundles Python 3.9).
# 0.1 Create a virtual environment
python -m pip install virtualenv
python -m virtualenv venv
# 0.2 Activate Virtual Environment (Windows 10)
.\venv\Scripts\activate
## If you're not on Windows, refer to a tutorial on
## activating a virtualenv for your platform.
# 1. Install PyTorch
pip install torch torchvision
# 2. Install other stuff
pip install cython
python setup.py build develop --user
python -m pip install -r requirements.txt
python -m pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI
pip uninstall cython_bbox
pip install git+https://github.com/valentin-fngr/cython_bbox.git # to fix np.float deprecated issue
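To sanity-check the installation, you can run a quick import test inside the activated virtual environment (a minimal sketch; check_install.py is just a hypothetical file name, and CUDA availability depends on your PyTorch build):
# check_install.py - quick sanity check for the core dependencies
import torch
import torchvision

print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())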
Install Blender 2.93 and add the Blender directory to your environment PATH.
For example, if your Blender executable is at C:/Users/ABC/Blender 2.93/path/to/blender.exe,
add C:/Users/ABC/Blender 2.93/path/to/ to your environment PATH.
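To confirm programmatically that blender resolves on your PATH, a small check like the following works (check_blender_path.py is a hypothetical helper name):
# check_blender_path.py - verify that "blender" is reachable on the PATH
import shutil

blender_path = shutil.which("blender")
if blender_path is None:
    print("blender not found on PATH - double-check your environment variable")
else:
    print("Found blender at:", blender_path)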
Please verify your Blender installation by running the following in your command line interface:
# Bring up Blender Interface
blender
# Gives you Python Interactive Console with Headless Blender.
blender -b --python-console
# You should see something like
# Python 3.9.X (default, Mar 1 2021, 08:18:55) [XYZ] on ABCOS
# then press Ctrl+Z followed by the Enter key to exit the interactive console.
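You can also do this check non-interactively by passing a small script to headless Blender with --python (a minimal sketch; check_bpy.py is a hypothetical file name, and bpy.app.version_string is part of Blender's Python API):
# check_bpy.py - run with: blender -b --python check_bpy.py
import sys
import bpy

print("Blender version:", bpy.app.version_string)  # should report 2.93.x
print("Bundled Python:", sys.version.split()[0])   # should report 3.9.x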
If all is well, you can proceed to downloading the models:
Download the YOLO detection model from Google Drive:
https://drive.google.com/open?id=1D47msNOOiJKvPOXlnpyzdKA3k6E97NTC
and put it in detector/yolo/data/
Download the 2D pose detection model from Google Drive:
https://drive.google.com/file/d/1S-ROA28de-1zvLv-hVfPFJ5tFBYOSITb/view
and put it in pretrained_models/
Download the 3D pose lifting model from Google Drive:
https://drive.google.com/file/d/1Om_hULE-6JLtOtZuBwjlD1xzzH6at0sV/view
and put it in checkpoint/pose3d/FT_MB_lite_MB_ft_h36m_global_lite/
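Before starting the server, you may want to confirm that the weights landed in the expected folders. A minimal sketch (the directory names come from the steps above; check_models.py is a hypothetical helper, and the exact file names depend on what you downloaded):
# check_models.py - confirm the downloaded weights are where the pipeline expects them
from pathlib import Path

model_dirs = [
    "detector/yolo/data",
    "pretrained_models",
    "checkpoint/pose3d/FT_MB_lite_MB_ft_h36m_global_lite",
]

for d in model_dirs:
    files = list(Path(d).glob("*"))
    status = ", ".join(f.name for f in files) if files else "EMPTY - model missing?"
    print(f"{d}: {status}")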
With the virtual environment activated, run
python webserver.py
and follow the instructions on the HTML page.
Some of the results are shown below:
Please see ./blender_files/instructions.txt for how to add your own models.
Once a model is added correctly, rerun python webserver.py
and the program should detect your model right away.
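For reference, here is a minimal sketch of how such a scan could look, assuming added models are stored as .blend files under ./blender_files/ (list_models.py is illustrative only; the actual discovery logic lives in webserver.py and may differ):
# list_models.py - illustrative only: list .blend files a scan could pick up
from pathlib import Path

blender_dir = Path("./blender_files")
models = sorted(blender_dir.glob("*.blend"))
if not models:
    print("No .blend files found under", blender_dir)
for m in models:
    print("Found model:", m.name)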
I'm not an expert in Blender or web design, so the website is a bit rough. What I'm trying to share is an easy pipeline from 2D images to 3D pose, realized through Blender-Python scripting. If you want to help improve the web page design, feel free to contact me! I appreciate any kind of support from the community.
The output from MotionBert has shape [1, N, 17, 3], where N is the number of images (frames). Each frame therefore contains 17 three-dimensional keypoints.
The 17-keypoint format used in MotionBert is:
'root', 'RHip', 'RKnee', 'RAnkle', 'LHip', 'LKnee', 'LAnkle', 'torso', 'neck', 'nose', 'head', 'LShoulder', 'LElbow', 'LWrist', 'RShoulder', 'RElbow', 'RWrist'
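For example, you can turn these joint names into an index lookup and read a single joint out of the [1, N, 17, 3] output (a minimal sketch with a dummy array standing in for the real MotionBert output):
# keypoints_demo.py - index into a MotionBert-style [1, N, 17, 3] output
import numpy as np

JOINT_NAMES = [
    'root', 'RHip', 'RKnee', 'RAnkle', 'LHip', 'LKnee', 'LAnkle', 'torso',
    'neck', 'nose', 'head', 'LShoulder', 'LElbow', 'LWrist',
    'RShoulder', 'RElbow', 'RWrist',
]
JOINT_INDEX = {name: i for i, name in enumerate(JOINT_NAMES)}

# Dummy output standing in for a real result: 1 batch, N=4 frames, 17 joints, xyz
output = np.zeros((1, 4, 17, 3))

frame = 0
nose_xyz = output[0, frame, JOINT_INDEX['nose']]  # (x, y, z) of the nose in frame 0
print("nose position in frame 0:", nose_xyz)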
(This is the MC from my favorite video game!)
Since I didn't make this 3D model myself, there are some issues with the character's rigging: you can see a few strands of hair and the buttons on his clothes floating in the air. I didn't spend time fixing this because I consider it a minor visual issue.
3D Model [Kiryu Kazuma] Credit to
"Yakuza 5 - Kazuma Kiryu (utility jacket)" (https://skfb.ly/oHNUL) by We_Will_Meet_Again is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).