
[CVPR 2024✨Highlight] Official repository for HOLD, the first method that jointly reconstructs articulated hands and objects from monocular videos without assuming a pre-scanned object template and 3D hand-object training data.
https://zc-alexfan.github.io/hold
MIT License

[CVPR'24 Highlight] HOLD: Category-agnostic 3D Reconstruction of Interacting Hands and Objects from Video

👉 I plan to enter the job market in Summer/Fall 2025. If you have an opening, feel free to email! 👈


[ Project Page ] [ Paper ] [ SupMat ] [ ArXiv ] [ Video ] [ HOLD Account ] [ ECCV'24 HOLD+ARCTIC Challenge ]

Authors: Zicong Fan, Maria Parelli, Maria Eleni Kadoglou, Muhammed Kocabas, Xu Chen, Michael J. Black, Otmar Hilliges

News

🚀 Register a HOLD account here for news on code releases, downloads, and future updates!


This is the official repository for HOLD, a method that jointly reconstructs articulated hands and objects from monocular videos without assuming a pre-scanned object template.

HOLD can reconstruct 3D geometries of novel objects and hands:


Potential directions from HOLD


Features

TODOs

Documentation

Getting started

Get a copy of the code:

git clone https://github.com/zc-alexfan/hold.git
cd hold; git submodule update --init --recursive
  1. Set up environments

    • Follow the instructions here: docs/setup.md.
    • You may skip external dependencies for now.
  2. Train on a preprocessed sequence

    • Start with one of our preprocessed in-the-wild sequences, such as hold_bottle1_itw.
    • Familiarize yourself with the usage guidelines in docs/usage.md for this preprocessed sequence.
    • This will let you train HOLD, render results, and experiment with our interactive viewer.
    • At this stage, you can also explore the HOLD code in the ./code directory.
  3. Set up external dependencies and process custom videos

    • After understanding the initial tools, set up the "external dependencies" as outlined in docs/setup.md.
    • Preprocess the images from the hold_bottle1_itw sequence by following the instructions in docs/custom.md.
    • Train on this sequence to learn how to build a custom dataset.
    • You can capture your own custom video and reconstruct it in 3D at this point.
    • Most preprocessing artifact files are documented in docs/data_doc.md, which you can use as a reference.
  4. Two-hand setting: Bimanual category-agnostic reconstruction

    • At this point, you can preprocess and train on a custom single-hand sequence.
    • Now you can take on the bimanual category-agnostic reconstruction challenge!
    • Follow the instructions in docs/arctic.md to reconstruct two-hand manipulation on ARCTIC sequences.

Official Citation

@inproceedings{fan2024hold,
  title={{HOLD}: Category-agnostic 3d reconstruction of interacting hands and objects from video},
  author={Fan, Zicong and Parelli, Maria and Kadoglou, Maria Eleni and Kocabas, Muhammed and Chen, Xu and Black, Michael J and Hilliges, Otmar},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={494--504},
  year={2024}
}

✨CVPR 2023: ARCTIC is a dataset that provides accurate body/hand/object poses and multi-view RGB videos of articulated object manipulation. See our project page for details.

ARCTIC demo


Contact

For technical questions, please create an issue. For other questions, please contact the first author.

Acknowledgments

The authors would like to thank: Benjamin Pellkofer for IT/web support; Chen Guo, Egor Zakharov, Yao Feng, and Artur Grigorev for insightful discussions; and Yufei Ye for the DiffHOI code release.

Our code builds on Vid2Avatar, aitviewer, VolSDF, NeRF++, and SNARF. If you find our work useful, consider checking out theirs as well.