HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes

Zan Wang, Yixin Chen, Tengyu Liu, Yixin Zhu, Wei Liang, Siyuan Huang

This repository is the official implementation of "HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes".

In this work, we propose HUMANISE, a large-scale and semantic-rich human-scene interaction dataset in which each human-scene interaction is paired with a language description. HUMANISE enables a new task: language-conditioned human motion generation in 3D scenes.

Paper | arXiv | Project Page (https://silverster98.github.io/HUMANISE/) | Data

Abstract

Learning to generate diverse scene-aware and goal-oriented human motions in 3D scenes remains challenging due to the mediocre characteristics of the existing datasets on Human-Scene Interaction (HSI); they only have limited scale/quality and lack semantics. To fill in the gap, we propose a large-scale and semantic-rich synthetic HSI dataset, denoted as HUMANISE, by aligning the captured human motion sequences with various 3D indoor scenes. We automatically annotate the aligned motions with language descriptions that depict the action and the unique interacting objects in the scene; e.g., sit on the armchair near the desk. HUMANISE thus enables a new generation task, language-conditioned human motion generation in 3D scenes. The proposed task is challenging as it requires joint modeling of the 3D scene, human motion, and natural language. To tackle this task, we present a novel scene-and-language conditioned generative model that can produce 3D human motions of the desirable action interacting with the specified objects. Our experiments demonstrate that our model generates diverse and semantically consistent human motions in 3D scenes.

Preparation

1. Environment Setup

Note: we run our code with PyTorch 1.10 and CUDA 11.3.
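
The full dependency list is not reproduced here, but as a minimal sketch matching the versions above (the environment name and Python version are assumptions):

# create and activate a fresh environment (name and Python version are arbitrary choices)
conda create -n humanise python=3.8 -y
conda activate humanise
# install PyTorch 1.10 built against CUDA 11.3
pip install torch==1.10.0+cu113 torchvision==0.11.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html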

2. Data Preparation

  1. ScanNet V2 Dataset

    Remember to change the dataset folder configuration in utils/configuration.py (a sketch of the relevant settings follows this list).

  2. Our pre-synthesized data; alternatively, you can generate your own data with our pipeline (see HUMANISE Synthesis for more details).

  3. SMPLX v1.1
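
The configuration file gathers the dataset paths in one place. Below is a minimal sketch of what these settings might look like; apart from preprocess_scene_folder, which is referenced later in this README, the variable names are assumptions, so check utils/configuration.py for the actual ones.

# utils/configuration.py (sketch; names other than preprocess_scene_folder are hypothetical)
scannet_folder = '/path/to/ScanNet/scans'          # ScanNet V2 scans
humanise_folder = '/path/to/HUMANISE'              # pre-synthesized HUMANISE data
smplx_folder = '/path/to/smplx/models'             # SMPL-X v1.1 body models
preprocess_scene_folder = '/path/to/preprocessed'  # preprocessed ScanNet scenes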

HUMANISE Dataset

1. Synthesis

See HUMANISE Synthesis for more details.

2. Visualization

For HUMANISE dataset visualization, we provide the rendering script visualize_dataset.py, which renders an animation video from a top-down view. The result is saved in ./tmp/.

python visualize_dataset.py --pkl ${PKL} --index ${index} --vis
# python visualize_dataset.py --pkl your_path/lie/scene0000_001810_c71dc702-1f1d-4381-895c-f07e9a10876b/anno.pkl --index 0 --vis

Note: --vis renders the static human-scene interaction on screen with trimesh. To render offscreen instead (e.g., on a headless server), set PYOPENGL_PLATFORM=egl as in the command below.

PYOPENGL_PLATFORM=egl python visualize_dataset.py --pkl ${PKL} --index ${index}
# PYOPENGL_PLATFORM=egl python visualize_dataset.py --pkl your_path/lie/scene0000_001810_c71dc702-1f1d-4381-895c-f07e9a10876b/anno.pkl --index 0

See the data format documentation for more information.
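
As an illustrative sketch, each anno.pkl file can be inspected directly with Python's pickle module. The structure probed below is an assumption (the --index flag suggests one file holds several synthesized interactions), so consult the data format documentation for the authoritative layout.

import pickle

# load one synthesized annotation file (path as in the commands above)
with open('anno.pkl', 'rb') as f:
    annos = pickle.load(f)

# peek at the top-level structure; --index presumably selects one entry
print(type(annos))
if isinstance(annos, dict):
    print(annos.keys())
else:
    print(len(annos))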

Our Model

Preprocess ScanNet Scenes

Follow the link to preprocess the ScanNet scenes, then change the preprocess_scene_folder configuration in utils/configuration.py.

Action-Specific Model

Action-Agnostic Model

Pretrained Models

You can use our pretrained models, which are provided in the checkpoints folder:

Stamp | Pretrained Model
POINTTRANS_C_32768 | scene model (point transformer)
20220829_194320 | action-specific model (walk)
20220830_203617 | action-specific model (sit)
20220830_203832 | action-specific model (stand up)
20220830_204043 | action-specific model (lie)
20220831_153356 | action-agnostic model

Put the downloaded checkpoints into the outputs/ folder as follows:

-| model/
-| outputs/
---| POINTTRANS_C_32768/
---| 20220829_194320/
---| ...
-| scripts/
-| ...
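
For example, assuming the checkpoints were downloaded as a single archive (the archive name below is hypothetical), the layout above could be produced with:

mkdir -p outputs
unzip checkpoints.zip -d outputs/
ls outputs/  # should list POINTTRANS_C_32768/, 20220829_194320/, ...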

Citation

If you find our project useful, please consider citing us:

@inproceedings{wang2022humanise,
  title={HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes},
  author={Wang, Zan and Chen, Yixin and Liu, Tengyu and Zhu, Yixin and Liang, Wei and Huang, Siyuan},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2022}
}

Acknowledgements

Some code is borrowed from PSI-release, point-transformer, Pointnet2.ScanNet, and YouRefIt_ERU.

License

Our code and data are released under the MIT license. The datasets used in our project are subject to their respective licenses.