[Project Page] | [Paper] | [Video]
Official Implementation of the paper FLAME: Free-form Language-based Motion Synthesis & Editing (AAAI'23)
This project was tested in the following environment; please set it up on your machine. You may need the following packages to run this repo.
apt install libboost-dev libglfw3-dev libgles2-mesa-dev freeglut3-dev libosmesa6-dev libgl1-mesa-glx
:exclamation: To abide by the license, we cannot directly provide the original data files.
You may need SMPL and DMPL to preprocess motion data. Please refer to AMASS for this. smpl_model and dmpl_model should be located in the project root directory.
You may need the following packages for visualization.
Create a virtual environment and activate it.
conda create -n flame python=3.8
conda activate flame
Install the required packages. We recommend installing the corresponding versions of PyTorch and PyTorch3D first.
pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable" # PyTorch3D
Install VPoser, PyOpenGL, and PyOpenGL_Accelerate following their respective installation guides.
Install other required packages.
pip install -r requirements.txt
Preprocess AMASS dataset.
./scripts/unzip_dataset.sh
This will unzip the downloaded AMASS data into data/amass_smplhg. You can also unzip the data manually.
Prepare HumanML3D dataset.
python scripts/prepare_humanml3d.py
Prepare BABEL dataset.
python scripts/prepare_babel_dataset.py
You can train your own model by running the following command. Training configs can be set via config files in configs/ or via command-line arguments (Hydra format).
python train.py
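As a rough sketch, a Hydra-style config under configs/ might look like the following. The keys below are illustrative assumptions, not the repo's actual schema; check the files in configs/ for the real options.

```yaml
# Hypothetical config sketch; the real keys are defined in configs/ of this repo.
defaults:
  - dataset: humanml3d   # assumed dataset group name
optimizer:
  lr: 1e-4
trainer:
  max_epochs: 500
```

With Hydra, any key can equivalently be overridden on the command line in dotted syntax, e.g. python train.py trainer.max_epochs=500 (again assuming these key names exist).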
Testing takes a long time, since it must generate all samples in the test set. Run test.py with proper config settings in configs/test.yaml, then run eval_util.py to evaluate the results.
Set your sampling config in configs/t2m_sample.yaml. Sampled results will be saved to outputs/. You can export json output to visualize in the Unity Engine; the exported json includes the root joint's position and the rotations of all other joints in quaternion format.
python t2m_sample.py
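A minimal sketch of consuming such an export, assuming a hypothetical schema (per-frame root_position plus per-joint quaternions); the repo's actual json layout may differ:

```python
import json
import math

# Hypothetical export schema: the actual JSON layout produced by this repo may differ.
sample = json.loads("""
{
  "frames": [
    {
      "root_position": [0.0, 0.9, 0.0],
      "joint_rotations": {"left_knee": [0.7071, 0.7071, 0.0, 0.0]}
    }
  ]
}
""")

def quat_norm(q):
    """Length of a (w, x, y, z) quaternion; valid rotations are unit quaternions."""
    return math.sqrt(sum(c * c for c in q))

frame = sample["frames"][0]
for joint, q in frame["joint_rotations"].items():
    # Sanity-check that each exported rotation is (approximately) unit length.
    print(joint, round(quat_norm(q), 3))
```

This kind of check is useful before feeding the quaternions into an animation system such as Unity, which expects unit rotations.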
Set your text-to-motion editing config in configs/edit_motion.yaml. You can choose the motion to be edited, the joints to edit, and the text prompt. Sampled results will be saved to outputs/.
python edit_motion.py
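For illustration only, an editing config covering those three choices might look like the fragment below; the key names are hypothetical assumptions, and the actual schema is whatever configs/edit_motion.yaml defines.

```yaml
# Hypothetical fragment; see configs/edit_motion.yaml for the real key names.
motion_path: outputs/sample_motion.npy   # assumed path to the motion to edit
edit_joints: [left_arm, right_arm]       # assumed joint identifiers
text_prompt: "a person raises both arms"
```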
@article{kim2022flame,
title={Flame: Free-form language-based motion synthesis \& editing},
author={Kim, Jihoon and Kim, Jiseob and Choi, Sungjoon},
journal={arXiv preprint arXiv:2209.00349},
year={2022}
}
Copyright (c) 2022 Korea University and Kakao Brain Corp. All Rights Reserved. Licensed under the Apache License, Version 2.0. (see LICENSE for details)