Jiayi Liu, Hou In Ivan Tam, Ali Mahdavi-Amiri, Manolis Savva
CVPR 2024
Page | Paper | Data (alternative link for data: OneDrive)
We recommend using miniconda to manage system dependencies. The environment was tested on Ubuntu 20.04.4 LTS.
```bash
# Create a conda environment
conda create -n cage python=3.10
conda activate cage
# Install PyTorch
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
# Install PyGraphviz
conda install --channel conda-forge pygraphviz
# Install other packages
pip install -r requirements.txt
# Install PyTorch3D (not required for training)
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install pytorch3d -c pytorch3d
```
We share the training data (here, ~101MB), preprocessed from the PartNet-Mobility dataset. Once downloaded, extract the archive and place it directly in the project folder. The data root can be configured with `system.datamodule.root=<path/to/your/data/directory>` in the `configs/cage.yaml` file. If downloading the data from our server is slow, please try the alternative link on OneDrive.
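The dotted key above maps to a nested entry in the config file. As a rough sketch, the relevant section of `configs/cage.yaml` might look like the following (the exact nesting and any sibling keys are assumptions inferred from the dotted path, not the actual file contents):

```yaml
system:
  datamodule:
    # Assumed location of the data-root option; point this at the
    # directory where you extracted the downloaded training data.
    root: /path/to/your/data/directory
```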
Run `python main.py --config configs/cage.yaml --log_dir <folder/for/logs>` to train the model from scratch. The experiment files will be recorded at `./<log_dir>/cage/<version>`. The original model was trained on two NVIDIA A40 GPUs.
Please cite our work if you find it helpful:
```bibtex
@article{liu2023cage,
  author        = {Liu, Jiayi and Tam, Hou In Ivan and Mahdavi-Amiri, Ali and Savva, Manolis},
  title         = {{CAGE: Controllable Articulation GEneration}},
  year          = {2023},
  eprint        = {2312.09570},
  archivePrefix = {arXiv}
}
```
This implementation is partially powered by 🤗Diffusers.