
Interactive Character Control with Auto-Regressive Motion Diffusion Models
Project page: https://yi-shi94.github.io/amdm_page/
License: BSD 3-Clause "New" or "Revised" License


Yi Shi, Jingbo Wang, Xuekun Jiang, Bingkun Lin, Bo Dai, Xue Bin Peng

Links

page | paper | video | poster | slides

Implementation of Auto-regressive Motion Diffusion Model (A-MDM)

We developed a PyTorch framework for kinematics-based, auto-regressive motion generation models, supporting both training and inference. The framework also includes implementations of real-time inpainting and reinforcement learning-based interactive control. If you have any questions about A-MDM, feel free to reach out via a GitHub issue or email.
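
For intuition, here is a minimal, self-contained sketch of the core mechanism: each new frame is produced by a short reverse-diffusion loop conditioned on the previous frame, and the generated frame then conditions the next one. The tensor sizes, noise schedule, and toy MLP denoiser below are illustrative assumptions, not the architecture or configuration used in this repo.

import torch
import torch.nn as nn

FRAME_DIM, T = 64, 15                    # pose feature size and diffusion steps (illustrative)
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# Toy denoiser: predicts the noise in x_t from the noisy frame, the previous frame, and the step index.
denoiser = nn.Sequential(nn.Linear(2 * FRAME_DIM + 1, 256), nn.SiLU(), nn.Linear(256, FRAME_DIM))

@torch.no_grad()
def sample_next_frame(prev_frame):
    """Reverse diffusion for a single frame, conditioned on the previous frame."""
    x = torch.randn_like(prev_frame)                      # start from pure noise
    for t in reversed(range(T)):
        t_embed = torch.full((x.shape[0], 1), t / T)
        eps = denoiser(torch.cat([x, prev_frame, t_embed], dim=-1))
        x = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:                                         # DDPM-style noise injection except at t == 0
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x

# Auto-regressive rollout: each generated frame becomes the condition for the next.
frame = torch.zeros(1, FRAME_DIM)                         # placeholder starting pose
motion = [frame]
for _ in range(120):
    frame = sample_next_frame(frame)
    motion.append(frame)
motion = torch.cat(motion, dim=0)                         # (121, FRAME_DIM)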

Update

  1. July 28, 2024: framework released.
  2. Aug 24, 2024: LAFAN1 15-step checkpoint released.
  3. Sep 5, 2024: 100STYLE 25-step checkpoint released.
  4. Stay tuned for support for more datasets.

Checkpoints

Download, unzip, and merge the contents into your output directory (see the example below).

LAFAN1_15step | 100STYLE_25step
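
For example, a small script like the one below unpacks a downloaded checkpoint archive into ./output/. The archive name and the assumption that the zip mirrors the output/ folder layout are illustrative; adjust them to the file you actually downloaded.

import zipfile
from pathlib import Path

archive = Path("LAFAN1_15step.zip")      # hypothetical name of the downloaded checkpoint archive
out_dir = Path("output")
out_dir.mkdir(exist_ok=True)

# Extract in place; files with the same relative paths are overwritten,
# which is what "merge into your output directory" amounts to.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(out_dir)

print(sorted(p.relative_to(out_dir) for p in out_dir.rglob("*.pth")))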

Dataset Preparation

LaFAN1:

Download and extract it under the ./data/ directory. Note: files whose names start with 'obstacle' were not included in our experiments; a quick way to filter them is shown below.
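
A small check that skips those clips during preparation, assuming the extracted BVH files sit under ./data/lafan1/ (the folder name is an assumption):

from pathlib import Path

lafan1_dir = Path("data/lafan1")                          # assumed extraction folder
skipped = sorted(lafan1_dir.glob("obstacle*.bvh"))        # clips excluded from our experiments
kept = [p for p in lafan1_dir.glob("*.bvh") if not p.name.startswith("obstacle")]
print(f"keeping {len(kept)} clips, skipping {len(skipped)} obstacle clips")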

100STYLE:

Download and extract it under the ./data/ directory.

Arbitrary BVH dataset:

Place your BVH files under the ./data/ directory and create a YAML config file for them in ./config/model/ (see the sketch below).
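
A low-friction way to start is to adapt one of the shipped configs, e.g. config/model/amdm_lafan1.yaml; the new file name below is hypothetical, and the fields you need to edit depend on your dataset:

import shutil
from pathlib import Path

src = Path("config/model/amdm_lafan1.yaml")   # existing config used as a template
dst = Path("config/model/amdm_mydata.yaml")   # hypothetical config name for your BVH dataset
shutil.copy(src, dst)
print(f"Copied {src} -> {dst}; edit the dataset-specific fields inside it.")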

AMASS:

Follow the procedure described in the HuMoR repository.

HumanML3D:

Follow the procedure described in the HumanML3D repository.

Sanity Check:

python run_sanity_data.py

Base Model

Training

python run_base.py --arg_file args/amdm_DATASET_train.txt

or

python run_base.py \
  --model_config config/model/amdm_lafan1.yaml \
  --log_file output/base/amdm_lafan1/log.txt \
  --int_output_dir output/base/amdm_lafan1/ \
  --out_model_file output/base/amdm_lafan1/model_param.pth \
  --mode train \
  --master_port 0 \
  --rand_seed 122

Training-time visualizations are saved to the directory given by --int_output_dir.

Inference

python run_env.py --arg_file args/RP_amdm_DATASET.txt

Inpainting

python run_env.py --arg_file args/PI_amdm_DATASET.txt
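
Conceptually, inpainting here means that during each reverse-diffusion step some coordinates of the next frame (for example, a target root position) are overwritten with appropriately noised target values, so the model only fills in the remaining dimensions. The sketch below illustrates that mechanism on a toy denoiser; the feature layout, mask, and MLP are assumptions for illustration, not this repo's API.

import torch
import torch.nn as nn

FRAME_DIM, T = 64, 15
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)
denoiser = nn.Sequential(nn.Linear(2 * FRAME_DIM + 1, 256), nn.SiLU(), nn.Linear(256, FRAME_DIM))

@torch.no_grad()
def inpaint_next_frame(prev_frame, target, mask):
    """mask == 1 marks coordinates that must match `target` (e.g. a desired root trajectory)."""
    x = torch.randn_like(prev_frame)
    for t in reversed(range(T)):
        # Keep the constrained coordinates on a noised version of the target,
        # consistent with the current noise level.
        noised_target = torch.sqrt(alpha_bars[t]) * target + torch.sqrt(1 - alpha_bars[t]) * torch.randn_like(target)
        x = mask * noised_target + (1 - mask) * x
        t_embed = torch.full((x.shape[0], 1), t / T)
        eps = denoiser(torch.cat([x, prev_frame, t_embed], dim=-1))
        x = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return mask * target + (1 - mask) * x                 # hard-set the constrained coordinates at the end

prev = torch.zeros(1, FRAME_DIM)
target = torch.zeros(1, FRAME_DIM)
mask = torch.zeros(1, FRAME_DIM)
mask[:, :3] = 1.0                                         # e.g. constrain the first three features
print(inpaint_next_frame(prev, target, mask).shape)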

High-Level Controller

Training

python run_env.py --arg_file args/ENV_train_amdm_DATASET.txt

Inference

python run_env.py --arg_file args/ENV_test_amdm_DATASET.txt
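
At a high level, the RL controller treats the auto-regressive motion model as the environment's transition function: the policy observes the character state and a task goal, outputs an action that steers the next-frame sample, and receives a reward measuring task progress. The loop below is only a schematic of that interaction with stand-in components (untrained policy, simplified transition and reward); it is not the training code in run_env.py.

import torch
import torch.nn as nn

FRAME_DIM, EPISODE_LEN = 64, 60

def motion_model_step(frame, action):
    # Stand-in for sampling the next frame from the A-MDM, steered by the policy's action.
    return frame + 0.1 * action

def goal_reward(frame, goal_xy):
    # Stand-in reward: negative distance between the (assumed) root xy features and the goal.
    return -torch.linalg.norm(frame[:, :2] - goal_xy, dim=-1)

policy = nn.Sequential(nn.Linear(FRAME_DIM + 2, 128), nn.ReLU(), nn.Linear(128, FRAME_DIM))

frame = torch.zeros(1, FRAME_DIM)
goal = torch.tensor([[2.0, 0.0]])
episode_return = 0.0
for _ in range(EPISODE_LEN):
    action = policy(torch.cat([frame, goal], dim=-1))
    frame = motion_model_step(frame, action)
    episode_return += goal_reward(frame, goal).item()
print("episode return:", episode_return)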

Installation

conda create -n amdm python=3.7
conda activate amdm
pip install -r requirement.txt
mkdir output && mkdir data

For users who wish to create more motion variants from a mocap dataset

  1. Train the base model.
  2. Follow the main function in gen_base_bvh.py; you can generate diverse motions from any starting pose, as sketched below.
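
Independent of the exact helpers in gen_base_bvh.py (which is where the real sampling and BVH export live), the general pattern is to repeat the auto-regressive rollout several times from the same starting pose with different noise and keep all trajectories. A schematic version with a stand-in sampler:

import torch

FRAME_DIM, NUM_VARIANTS, NUM_FRAMES = 64, 8, 120

def sample_next_frame(prev_frame):
    # Stand-in for one auto-regressive sampling step of the trained base model.
    return prev_frame + 0.05 * torch.randn_like(prev_frame)

start_pose = torch.zeros(1, FRAME_DIM)            # any starting pose, tiled into a batch of variants
frames = [start_pose.repeat(NUM_VARIANTS, 1)]
for _ in range(NUM_FRAMES):
    frames.append(sample_next_frame(frames[-1]))
motions = torch.stack(frames, dim=1)              # (NUM_VARIANTS, NUM_FRAMES + 1, FRAME_DIM)

torch.save(motions, "diverse_rollouts.pt")        # gen_base_bvh.py is the place to export BVH instead
print(motions.shape)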

Acknowledgement

Part of the RL modules in our framework are based on the existing codebase of MotionVAE. Please cite their work if you find RL-guided auto-regressive motion generative models helpful to your research.

BibTeX

@article{shi2024amdm,
  author = {Shi, Yi and Wang, Jingbo and Jiang, Xuekun and Lin, Bingkun and Dai, Bo and Peng, Xue Bin},
  title = {Interactive Character Control with Auto-Regressive Motion Diffusion Models},
  year = {2024},
  issue_date = {August 2024},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {43},
  journal = {ACM Trans. Graph.},
  month = {jul},
  keywords = {motion synthesis, diffusion model, reinforcement learning}
}