Sirui-Xu / InterDiff

[ICCV 2023] Official PyTorch implementation of the paper "InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion"
https://sirui-xu.github.io/InterDiff
MIT License

InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion

Sirui Xu, Zhengyuan Li, Yu-Xiong Wang*, Liang-Yan Gui*
University of Illinois Urbana-Champaign
ICCV 2023

๐Ÿ  About

This paper addresses a novel task of anticipating 3D human-object interactions (HOIs). Most existing research on HOI synthesis lacks comprehensive whole-body interactions with dynamic objects, e.g., often limited to manipulating small or static objects. Our task is significantly more challenging, as it requires modeling dynamic objects with various shapes, capturing whole-body motion, and ensuring physically valid interactions. To this end, we propose InterDiff, a framework comprising two key steps: (i) interaction diffusion, where we leverage a diffusion model to encode the distribution of future human-object interactions; (ii) interaction correction, where we introduce a physics-informed predictor to correct denoised HOIs in a diffusion step. Our key insight is to inject prior knowledge that the interactions under reference with respect to contact points follow a simple pattern and are easily predictable. Experiments on multiple human-object interaction datasets demonstrate the effectiveness of our method for this task, capable of producing realistic, vivid, and remarkably long-term 3D HOI predictions.
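The two-step framework above can be sketched as a standard DDPM-style sampling loop with a correction hook. This is a minimal illustration with stand-in models and illustrative tensor sizes, not the repository's implementation: the `denoiser`, `physics_correction`, noise schedule, and the set of steps at which correction is applied are all assumptions here.

```python
import numpy as np

T = 100                                   # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)        # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def denoiser(x_t, t):
    """Stand-in for the interaction diffusion model: predicts the noise."""
    return np.zeros_like(x_t)

def physics_correction(x0_hat):
    """Stand-in for the physics-informed predictor that corrects a denoised
    HOI, e.g., by re-predicting object motion relative to contact points."""
    return x0_hat

def sample(shape, correct_at=frozenset(range(20)), seed=0):
    rng = np.random.default_rng(seed)
    x_t = rng.standard_normal(shape)      # start from Gaussian noise
    for t in reversed(range(T)):
        eps = denoiser(x_t, t)
        # Estimate the clean sample x0 from the predicted noise.
        x0_hat = (x_t - np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
        if t in correct_at:               # (ii) interaction correction
            x0_hat = physics_correction(x0_hat)
        if t == 0:
            return x0_hat
        # DDPM posterior mean over x_{t-1}, then re-noise for the next step.
        mean = (np.sqrt(alpha_bar[t - 1]) * betas[t] * x0_hat
                + np.sqrt(alphas[t]) * (1.0 - alpha_bar[t - 1]) * x_t) / (1.0 - alpha_bar[t])
        x_t = mean + np.sqrt(betas[t]) * rng.standard_normal(shape)

motion = sample((1, 16, 75))  # (batch, frames, pose-dim); sizes are illustrative
print(motion.shape)           # (1, 16, 75)
```

The point of the hook is that correction happens inside the diffusion loop, on the current estimate of the clean sample, rather than as a post-hoc fix after sampling finishes.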

๐Ÿ“– Implementation

To create the environment, check the requirement file requirements.txt, which is based on Python 3.7, and install the packages it lists.

[!NOTE] For specific packages such as psbody-mesh and human-body-prior, you may need to build from their sources.
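For example, both packages can be installed directly from their upstream GitHub sources (repository URLs assumed from the upstream projects; pin versions as needed):

```shell
# Build psbody-mesh from the MPI-IS mesh sources (may require Boost headers).
pip install git+https://github.com/MPI-IS/mesh.git

# Install human-body-prior from source.
pip install git+https://github.com/nghorbani/human_body_prior.git
```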

Alternatively, you may build from a more detailed requirement file based on Python 3.8, which may contain redundancies:

```shell
conda env create -f environment.yml
```

For more information about the implementation, see interdiff/README.md.

๐Ÿ“น Demo

๐Ÿ”ฅ News

๐Ÿ“ TODO List

๐Ÿ” Overview

๐Ÿ’ก Key Insight

We present HOI sequences (left), object motions (middle), and objects relative to the contacts after coordinate transformations (right). Our key insight is to inject coordinate transformations into a diffusion model, as the relative motion shows simpler patterns that are easier to predict, e.g., being almost stationary (top), or rotating around a fixed axis (bottom).
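The transformation behind this insight can be sketched with a toy example: expressing a world-frame object pose in a contact frame attached to the body. The data and function names below are hypothetical; the paper's actual contact extraction and correction live in the interaction-correction step.

```python
import numpy as np

def to_contact_frame(obj_R, obj_t, contact_R, contact_t):
    """Express a world-frame object pose relative to a contact frame.
    obj_R, contact_R: (3, 3) rotation matrices; obj_t, contact_t: (3,) translations."""
    rel_R = contact_R.T @ obj_R
    rel_t = contact_R.T @ (obj_t - contact_t)
    return rel_R, rel_t

# Toy sequence: an object held rigidly at a fixed grasp offset from a
# contact frame (e.g., a hand) that translates through the world.
off_R, off_t = np.eye(3), np.array([0.0, 0.1, 0.0])
world_ts, rel_ts = [], []
for f in range(5):
    c_R, c_t = np.eye(3), np.array([0.1 * f, 1.0, 0.0])   # moving contact frame
    o_R, o_t = c_R @ off_R, c_R @ off_t + c_t             # world-frame object pose
    _, r_t = to_contact_frame(o_R, o_t, c_R, c_t)
    world_ts.append(o_t)
    rel_ts.append(r_t)

# The world-frame trajectory varies per frame, but the contact-relative
# motion is stationary — the "simple pattern" the predictor exploits.
print(np.allclose(rel_ts, off_t))  # True
```

In the stationary case (top row of the figure) the relative pose is constant; in the rotating case (bottom row) it reduces to rotation about a fixed axis, both far easier to predict than the raw world-frame motion.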

๐Ÿ”— Citation

If you find our work helpful, please cite:

```bibtex
@inproceedings{xu2023interdiff,
  title={{InterDiff}: Generating 3D Human-Object Interactions with Physics-Informed Diffusion},
  author={Xu, Sirui and Li, Zhengyuan and Wang, Yu-Xiong and Gui, Liang-Yan},
  booktitle={ICCV},
  year={2023},
}
```

๐Ÿ‘ Acknowledgements

๐Ÿ“š License

This code is distributed under the MIT License.

Note that our code depends on other libraries, including SMPL, SMPL-X, PyTorch3D, Hugging Face, and Hydra, and uses several datasets, each of which has its own license that must also be followed.

๐ŸŒŸ Star History
