# NeuralDome & HOIM3 Dataset Toolbox
Welcome to the repository for the NeuralDome & HOIM3 Dataset Toolbox, which facilitates downloading, processing, and visualizing the datasets. This toolbox supports our publications:
| NeuralDome | HOIM3 |
| --- | --- |
| NeuralDome: A Neural Modeling Pipeline on Multi-View Human-Object Interactions (CVPR 2023) | HOI-M3: Capture Multiple Humans and Objects Interaction within Contextual Environment (CVPR 2024 Highlight) |
| We construct a 76-view dome to acquire a complex human-object interaction dataset, named HODome. | HOI-M3 is a large-scale dataset for modeling the interactions of multiple humans and multiple objects. |
| [Paper] [Video] [Project Page] | [Paper] [Video] [Project Page] |
| [Hodome Dataset] | [HOIM3 Dataset] |
## Updates
- July 1, 2024: [HOIM3] Due to the large size of the mask files, we are currently uploading annotated masks for the 3rd view only.
- June 30, 2024: Important! All object rotations were mistakenly saved as the transpose of the rotation matrix; transpose them when loading to recover the correct values.
- June 12, 2024: [HOIM3] Currently uploading the HOIM3 dataset to Google Cloud Drive.
- Jan. 05, 2024: [Hodome] Upload of Hodome is now complete!
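The rotation fix noted above can be applied as a minimal sketch, assuming the rotations are stored as 3x3 matrices (the function name here is ours, not part of the toolbox):

```python
import numpy as np

def fix_rotation(R_saved):
    """Undo the save bug: stored matrices are the transpose of the
    true rotation, so transposing once recovers the original."""
    return np.asarray(R_saved).T
```

Since rotation matrices are orthogonal, you can sanity-check the result with `R @ R.T == I` and `det(R) == 1`.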
## Setup and Download
### Setting Up Your Environment
To get started, set up your environment as follows:
```bash
# Create a conda virtual environment
conda create -n NeuralDome python=3.8 pytorch=1.11 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate NeuralDome
## Install PyTorch3D
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable"
# Install other requirements
pip install -r requirements.txt
```
### Preparing the Data
The complete dataset features 76-view RGB videos along with corresponding masks, mocap data, geometry, and scanned object templates. Download and extract the dataset from [this link](https://drive.google.com/drive/folders/1-QHvcwa71Wk7rdfnQrOyInqK-SWK6lRA):
```bash
for file in *.tar; do tar -xf "$file"; done
```
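If you are on a platform without a POSIX shell, the same extraction can be done with Python's standard `tarfile` module (a sketch; extracts every `.tar` archive in a directory in place):

```python
import tarfile
from pathlib import Path

def extract_all(directory="."):
    """Extract every .tar archive found in `directory` into that directory."""
    for archive in sorted(Path(directory).glob("*.tar")):
        with tarfile.open(archive) as tf:
            tf.extractall(path=directory)
```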
## Data Structure Overview
The dataset is organized as follows:
```
└── HODome
    ├── images
    │   └── Seq_Name
    │       └── 0
    │           ├── 000000.jpg
    │           ├── 000001.jpg
    │           ├── 000003.jpg
    │           └── ...
    ├── videos
    │   └── Seq_Name
    │       ├── data1.mp4
    │       ├── data2.mp4
    │       ├── ...
    │       └── data76.mp4
    ├── mocap
    │   └── Seq_Name
    │       ├── keypoints2d
    │       ├── keypoints3d
    │       ├── object
    │       └── smpl
    ├── mask
    │   └── Seq_Name
    │       ├── homask
    │       ├── hmask
    │       └── omask
    ├── calibration
    │   └── 20221018
    │       └── ...
    ├── dataset_information.json
    ├── startframe.json
    └── ...
```
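Given the layout above, frames for one camera view of a sequence can be enumerated with a small helper (a sketch following the tree above; this helper is not part of the toolbox):

```python
from pathlib import Path

def list_frames(root, seq_name, view=0):
    """Return sorted frame paths for one camera view of a sequence,
    following the HODome layout: <root>/images/<seq_name>/<view>/*.jpg"""
    frame_dir = Path(root) / "images" / seq_name / str(view)
    return sorted(frame_dir.glob("*.jpg"))
```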
## Extracting Images from Videos
Since the image files are extremely large, we have not uploaded them. Please run the following script to extract the image files from the provided videos:
```bash
python ./scripts/video2image.py
```
## Visualization Toolkit
Using PyTorch3D:
Our `hodome_visualization.py` and `hoim3_visualization.py` scripts showcase how to access the diverse annotations in our datasets. They use the following command-line arguments:
- `--root_path`: Directory containing the dataset.
- `--seq_name`: Sequence name to process.
- `--resolution`: Output image resolution.
- `--output_path`: Where to save rendered images.
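For reference, the flags above could be parsed with `argparse` along these lines (a sketch; the defaults and help strings here are assumptions, not the scripts' actual implementation):

```python
import argparse

def build_parser():
    """Parser mirroring the documented visualization flags."""
    parser = argparse.ArgumentParser(description="HODome visualization (sketch)")
    parser.add_argument("--root_path", required=True,
                        help="Directory containing the dataset")
    parser.add_argument("--seq_name", required=True,
                        help="Sequence name to process")
    parser.add_argument("--resolution", type=int, default=720,
                        help="Output image resolution")
    parser.add_argument("--output_path", required=True,
                        help="Where to save rendered images")
    return parser
```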
Ensure your environment and data are properly set up before executing the script. Here's an example command:
```bash
## Hodome
python ./scripts/hodome_visualization.py --root_path "/path/to/your/data" --seq_name "subject01_baseball" --resolution 720 --output_path "/path/to/your/output"
## HOI-M3
python ./scripts/hoim3_visualization.py --root_path "/path/to/your/data" --seq_name "subject01_baseball" --resolution 720 --output_path "/path/to/your/output" --vis_view 0
```
Using Blender:
Please refer to [render.md](docs/render.md).
## Citation
If you find our toolbox or dataset useful for your research, please consider citing our paper:
```bibtex
@inproceedings{zhang2023neuraldome,
  title={NeuralDome: A Neural Modeling Pipeline on Multi-View Human-Object Interactions},
  author={Juze Zhang and Haimin Luo and Hongdi Yang and Xinru Xu and Qianyang Wu and Ye Shi and Jingyi Yu and Lan Xu and Jingya Wang},
  booktitle={CVPR},
  year={2023}
}

@inproceedings{zhang2024hoi,
  title={HOI-M3: Capture Multiple Humans and Objects Interaction within Contextual Environment},
  author={Zhang, Juze and Zhang, Jingyan and Song, Zining and Shi, Zhanhe and Zhao, Chengfeng and Shi, Ye and Yu, Jingyi and Xu, Lan and Wang, Jingya},
  booktitle={CVPR},
  year={2024}
}
```