https://github.com/PeizhuoLi/walk-the-dog/
python train_vq.py --load=dataset1path,dataset2path,dataset3path --save=./path-to-save
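The comma-separated `--load` list presumably splits into one dataset path per entry. A minimal sketch of that kind of argument parsing (hypothetical; not the repo's actual train_vq.py code):

```python
import argparse

# Hypothetical sketch of how a comma-separated --load argument could be
# split into individual dataset paths; the actual train_vq.py may differ.
parser = argparse.ArgumentParser()
parser.add_argument("--load", type=lambda s: s.split(","))
parser.add_argument("--save", type=str)

args = parser.parse_args(["--load=dataset1path,dataset2path", "--save=./path-to-save"])
print(args.load)  # ["dataset1path", "dataset2path"]
```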
https://github.com/PeizhuoLi/walk-the-dog-unity
Pre-process Data
Use AI4Animation -> Importer -> BVH Importer in the menu bar for BVH files and FBX Importer for FBX files.
Duplicate the scene Assets/Projects/DeepPhase/Demos/Biped/MotionCapture. In the Inspector of the MotionEditor, enter the path of the imported data in the Editor field and hit Import.
Use AI4Animation -> Tools -> Pre Process to calculate the root coordinate.
Use AI4Animation -> Tools -> Data Exporter (Async) to export the pre-processed data.
For Editor, choose the MotionEditor containing all the data you need.
For Exporting Mode, use Velocities,Positions,Rotations
Check Use Butterworth Velocity Only option.
You can use the default settings for the rest of the options.
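The "Use Butterworth Velocity Only" option low-pass filters the exported velocities. As an illustration only, here is a simple exponential moving average standing in for the exporter's actual Butterworth filter (which has a flatter passband than this sketch):

```python
def smooth_velocities(vel, alpha=0.3):
    """Simple exponential low-pass filter, illustrating the kind of
    smoothing applied to exported velocities; the exporter itself uses
    a Butterworth filter. Illustration only."""
    out = []
    prev = vel[0]
    for v in vel:
        prev = alpha * v + (1 - alpha) * prev
        out.append(prev)
    return out

noisy = [0.0, 1.0, 0.0, 1.0, 0.0]
print(smooth_velocities(noisy))  # high-frequency flip-flopping is damped
```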
CC-BY-4.0 The 100STYLE Dataset - Ian Mason https://data.niaid.nih.gov/resources?id=zenodo_8127870.
This dataset contains over four million frames of stylized motion capture data. Both original bvh motion capture files (100STYLE.zip) and processed data (100Style-Labelled-Data.zip) are provided. The data is labelled using the hybrid approach to local phases described in the paper alongside the extraction of motion features such as joint positions, rotations and velocities. InputLabels.txt and OutputLabels.txt describe each of the feature dimensions extracted.
Although size 512 improves on expressiveness, it fails to create sufficient overlapping between datasets. Thus, we choose |A| = 32 for the human-dog setting and |A| = 64 for the stylized setting in our experiments according to the results.
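Here |A| is the codebook size: the number of discrete codes the vector quantizer can assign a frame to. A minimal nearest-neighbor quantization sketch (a toy 2-D codebook, not the paper's learned feature space):

```python
import random

random.seed(0)
CODEBOOK_SIZE = 32  # |A| = 32, as used for the human-dog setting

# Toy 2-D codebook; the real model learns these vectors during training.
codebook = [(random.random(), random.random()) for _ in range(CODEBOOK_SIZE)]

def quantize(x):
    """Return the index of the nearest codebook entry (squared L2)."""
    return min(range(CODEBOOK_SIZE),
               key=lambda i: (x[0] - codebook[i][0]) ** 2 + (x[1] - codebook[i][1]) ** 2)

idx = quantize((0.5, 0.5))
print(idx)
```

A smaller |A| forces motions from different datasets onto shared codes, which is the "overlapping" the note above is after.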
https://www.kaggle.com/datasets/dasmehdixtr/berkeley-multimodal-human-action-database
The Berkeley Multimodal Human Action Database (MHAD) contains 11 actions performed by 7 male and 5 female subjects in the range 23-30 years of age except for one elderly subject.
Copyright (c) 2013, Regents of the University of California
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
This data is free for use in research projects. You may include this data in commercially-sold products, but you may not resell this data directly, even in converted form. If you publish results obtained using this data, we would appreciate it if you would send the citation to your published paper to jkh+mocap@cs.cmu.edu, and also would add this text to your acknowledgments section: The data used in this project was obtained from mocap.cs.cmu.edu. The database was created with funding from NSF EIA-0196217.
https://drive.google.com/drive/folders/1_2jbZK48Li6sm1duNJnR_eyQjVdJQDoU
Note: most datasets lack hands and face.
Idea: convert blendshapes to face poses
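If the blendshape-to-face-pose idea is pursued, the core is a linear map from blendshape weights to pose parameters. A hypothetical sketch (blendshape names, bone names, and offsets all invented for illustration):

```python
# Hypothetical linear mapping from blendshape weights to face-bone pose
# offsets; the basis would have to be authored or fitted per rig.
POSE_BASIS = {
    "jawOpen":    {"jaw_pitch": 25.0, "lip_corner": 0.0},   # degrees
    "mouthSmile": {"jaw_pitch": 0.0,  "lip_corner": 12.0},
}

def blendshapes_to_pose(weights):
    """Accumulate weighted bone offsets from active blendshapes."""
    pose = {"jaw_pitch": 0.0, "lip_corner": 0.0}
    for name, w in weights.items():
        for bone, offset in POSE_BASIS[name].items():
            pose[bone] += w * offset
    return pose

print(blendshapes_to_pose({"jawOpen": 0.5, "mouthSmile": 1.0}))
```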
Idea: Record personal motion poses in VR social platforms (VRSNS)
Idea: Record personal motion poses while playing a VR music game
python test_vq.py --save=./pre-trained/human-dog
python offline_motion_matching.py --preset_name=human2dog --target_id=3
The target_id parameter specifies the index of the motion sequence in the dataset, which is the same as in the Motion Editor in our Unity module.
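In other words, `target_id` indexes the ordered list of motion sequences. A trivial sketch of that indexing (sequence names are hypothetical, and whether the index is 0- or 1-based should be checked against the Unity MotionEditor):

```python
# Hypothetical list of motion sequences in dataset order; target_id
# indexes this list, matching the ordering shown in the Unity MotionEditor.
sequences = ["walk_01", "trot_02", "run_03", "jump_04", "idle_05"]
target_id = 3
print(sequences[target_id])  # "jump_04" under this toy ordering
```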
Problem: we need to characterize an unknown A-pose skeleton that respects constraints.
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
eval "$(/home/fire/miniconda3/bin/conda shell.bash hook)"
conda env create -f environment.yml
conda activate walk-the-dog
python test_vq.py --save=./pre-trained/human-dog
python offline_motion_matching.py --preset_name=human2dog --target_id=3
Try partial data training:
python train_vq.py --load=Dataset-human-loco-gen2 --save=pre-trained/human-loco-gen2
python offline_motion_matching.py --preset_name=human-loco-gen2 --target_id=3
Problem: to characterize the target skeleton for transfer, it must have animations of its own.
Repeat the import and Pre Process steps from the Pre-process Data section above.
https://accad.osu.edu/research/motion-lab/mocap-system-and-data
Female 1 (81 files): bvh | pdf
Open Motion Project by ACCAD/The Ohio State University is licensed under a Creative Commons Attribution 3.0 Unported License.
Export Data for Training
Use AI4Animation -> Tools -> Data Exporter (Async) with the same settings listed in the Pre-process Data section above.
Idea: Large-scale re-targeting of existing datasets to the "Godot Engine" humanoid profile
python test_vq.py --save=./pre-trained/
python offline_motion_matching.py --preset_name=human2dog --target_id=3 # Swapping human2dog swaps input with output
Idea: Possibly useful for style-retargeting difficult motions, though the offline process is slow; the online version may be faster.
Idea: Add finger motion and face motion to existing skeletons.
python offline_motion_matching.py --target_id=3 --path4manifold=pre-trained/human-loco-female1-alpha1/ --input_idx=1 --output_idx=0
Idea: Creating animations for new skeletons, especially non-humanoid ones like cats, is challenging due to limited datasets. We can use the "Walk-the-Dog" algorithm to transfer animations from a large humanoid dataset to a new cat skeleton with a small dataset. This method addresses the challenge of sparse datasets and provides a scalable solution for generating high-quality animations efficiently.
Observation (unverified): it seems all animations must be present in the training dataset.
fire@DESKTOP-KEAGFB5:~$ eval "$(/home/fire/miniconda3/bin/conda shell.bash hook)"
(base) fire@DESKTOP-KEAGFB5:~$ conda activate walk-the-dog
(walk-the-dog) fire@DESKTOP-KEAGFB5:/mnt/c/Users/ernes/Downloads/walk-the-dog$ python train_vq.py --load=Dataset-human-loco-gen2,Dataset-dog-gen2,Dataset-osu-female1,Dataset-tpose --save=pre-trained/human-dog-female-tpose
(walk-the-dog) fire@DESKTOP-KEAGFB5:/mnt/c/Users/ernes/Downloads/walk-the-dog$ python test_vq.py --save=./pre-trained/human-dog-female-tpose
python offline_motion_matching.py --preset_name=human2dog --target_id=3 --path4manifold=pre-trained/human-dog-female-tpose/
Back out the T-pose dataset.
Idea: Use MotionGPT to create a snake animation -- required for transitions
Transfer from human to dog seems to work.
Good for cyclic animations.