
A vector-quantized periodic autoencoder (VQ-PAE) for motion alignment across different morphologies with no supervision [SIGGRAPH 2024].
Project page: https://peizhuoli.github.io/walkthedog/

WalkTheDog: Cross-Morphology Motion Alignment via Phase Manifolds


This repository provides the implementation of our vector-quantized periodic autoencoder. It learns a disconnected 1D phase manifold that aligns motions across different morphologies without requiring any paired data or joint correspondences. It is based on our work WalkTheDog: Cross-Morphology Motion Alignment via Phase Manifolds, published at SIGGRAPH 2024.

The Unity project used for visualization is provided in a separate repository here.

Prerequisites

This code has been tested under Ubuntu 20.04. Before starting, please set up your Anaconda environment by running:

conda env create -f environment.yml
conda activate walk-the-dog

Alternatively, you may manually install the packages listed in environment.yml (and their dependencies).

Quick Start

We provide pre-trained models for the Human-Loco dataset and the Dog dataset. To run the demo, please download the pre-processed datasets here and extract them into the ./Datasets directory under the root folder of the repository. In addition, download the pre-trained model here and place it directly under the root folder of the repository.
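For reference, the layout after these steps should look roughly like the sketch below. The dataset folder names are placeholders; only the ./Datasets location and the pre-trained/human-dog folder follow from the instructions above and the demo command below.

walk-the-dog/
├── Datasets/
│   ├── <pre-processed dataset 1>/
│   └── <pre-processed dataset 2>/
└── pre-trained/
    └── human-dog/
        └── <pre-trained model files>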

Then, run the demo by executing the following command:

python test_vq.py --save=./pre-trained/human-dog

The learned phase manifolds, the average-pose prediction networks, and the codebook will be exported as Manifold_*_final.npz files, .onnx files, and a VQ.npz file, respectively.
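If you want to inspect these exports outside of Unity, the following is a minimal sketch using NumPy. It assumes the exported .npz files sit in the current working directory and deliberately lists the stored array names instead of assuming them:

import glob
import numpy as np

# Collect the exported phase manifolds (Manifold_*_final.npz) and the codebook (VQ.npz),
# assuming they were exported into the current working directory.
for path in sorted(glob.glob("Manifold_*_final.npz")) + ["VQ.npz"]:
    archive = np.load(path)
    # Print the array names and shapes stored in each export without assuming their keys.
    print(path, {key: archive[key].shape for key in archive.files})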

Training from Scratch

To learn more about how to process data with Unity, please refer to our Unity repository here.

After obtaining the pre-processed data, you can train the model by executing the following command:

python train_vq.py --load=dataset1path,dataset2path,dataset3path --save=./path-to-save

The dataset paths should be separated by commas, and the datasets themselves must be stored in the ./Datasets folder. Note that the paths given in the command should omit the ./Datasets prefix. You can specify as many datasets as you want. The trained model will be saved in the ./path-to-save folder.
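For example, assuming the pre-processed datasets were extracted as ./Datasets/Human-Loco and ./Datasets/Dog (these folder names are only placeholders for your actual dataset folders), a training run could look like:

python train_vq.py --load=Human-Loco,Dog --save=./results/human-dog-scratch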

After training, you can generate the files needed for the Unity side with the test_vq.py script.
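Following the same pattern as the Quick Start demo, and assuming test_vq.py takes the same --save argument pointing at the training output folder, this would look like:

python test_vq.py --save=./path-to-save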

Motion Matching

We provide an offline Python implementation of our frequency-scaled motion matching. To reproduce the results in the paper, please execute the following command after running the test_vq.py script:

python offline_motion_matching.py --preset_name=human2dog --target_id=3

The target_id parameter specifies the index of the motion sequence in the dataset, matching the index shown in the Motion Editor of our Unity module.

The motion matching result will be saved into ./results/motion_matching/replay_sequence.npz.
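As with the other exports, you can peek at this file from Python before loading it in Unity. This is only a sketch that lists whatever arrays the script stored, without assuming their key names:

import numpy as np

# List the arrays stored in the exported replay sequence.
replay = np.load("./results/motion_matching/replay_sequence.npz")
print({key: replay[key].shape for key in replay.files})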

To visualize the motion matching result, please refer to the Unity repository here.

Acknowledgments

The code is adapted from the DeepPhase project under AI4Animation by @sebastianstarke.

The code for the class VectorQuantizer is adapted from CVQ-VAE by @lyndonzheng.