GrabNet is a generative model for 3D hand grasps. Given a 3D object mesh, GrabNet can predict several hand grasps for it. GrabNet consists of two successive models, CoarseNet (a cVAE) and RefineNet. It is trained on a subset of the GRAB dataset (right hand and object only). For more details please refer to the Paper or the project website.
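The coarse-to-fine idea can be illustrated with a minimal NumPy sketch. All dimensions, the random "decoder", and the refinement rule below are illustrative stand-ins, not GrabNet's actual PyTorch architecture: CoarseNet, a conditional VAE, samples latent codes conditioned on an object encoding and decodes them into initial hand poses; RefineNet then iteratively adjusts those poses.

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_net(obj_code, n_samples, latent_dim=16, pose_dim=24):
    """cVAE-decoder stand-in: sample latent codes conditioned on the
    object encoding and map them to coarse hand-pose parameters.
    Dimensions and weights are toy values, not GrabNet's real ones."""
    z = rng.standard_normal((n_samples, latent_dim))          # z ~ N(0, I)
    cond = np.concatenate([z, np.tile(obj_code, (n_samples, 1))], axis=1)
    w = rng.standard_normal((cond.shape[1], pose_dim)) * 0.1  # random "decoder"
    return cond @ w                                           # coarse poses

def refine_net(poses, steps=3):
    """Refinement stand-in: nudge each coarse pose a few times,
    mimicking RefineNet's iterative correction (toy update rule)."""
    for _ in range(steps):
        poses = poses - 0.1 * poses
    return poses

obj_code = rng.standard_normal(32)   # e.g. an encoding of the object mesh
coarse = coarse_net(obj_code, n_samples=5)
refined = refine_net(coarse)
print(coarse.shape, refined.shape)
```

Each of the 5 sampled latent codes yields a distinct grasp for the same object, which is how a single mesh produces several grasp hypotheses.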
Below you can see some generated results from GrabNet:
*Example grasps generated by GrabNet for four objects: Binoculars, Mug, Camera, and Toothpaste.*
Check out the long and short YouTube videos for more details.
This implementation can generate grasps for new objects provided by the user, run GrabNet on the test split of the dataset, retrain GrabNet with a new configuration, and evaluate the trained model.
This package's requirements are listed in `requirements.txt`. To install the dependencies, follow these steps:

```bash
git clone https://github.com/otaheri/GrabNet
cd GrabNet
pip install -r requirements.txt
```
For a quick demo of GrabNet, you can give it a try on Google Colab.

In order to use the GrabNet model, please follow the steps below:
```
GrabNet
├── grabnet
│   ├── models
│   │   ├── coarsenet.pt
│   │   └── refinenet.pt
```
Run the following command to extract the ZIP files:

```bash
python grabnet/data/unzip_data.py --data-path $PATH_TO_FOLDER_WITH_ZIP_FILES \
                                  --extract-path $PATH_TO_EXTRACT_DATASET_TO
```
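The extraction step amounts to unpacking every archive in the data folder into a target directory. A minimal sketch of that logic, using only the standard library (this is an illustration, not the actual contents of `unzip_data.py`):

```python
import zipfile
from pathlib import Path

def extract_all(zip_dir, out_dir):
    """Extract every .zip archive in zip_dir into out_dir, preserving
    each archive's internal folder structure. Returns the archive names
    that were processed, in sorted order."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    extracted = []
    for zpath in sorted(Path(zip_dir).glob("*.zip")):
        with zipfile.ZipFile(zpath) as zf:
            zf.extractall(out)
        extracted.append(zpath.name)
    return extracted

# Usage sketch (paths are placeholders):
# extract_all("/path/to/zips", "/path/to/extract")
```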
```
GRAB
├── data
│   ├── bps.npz
│   ├── obj_info.npy
│   ├── sbj_info.npy
│   └── [split_name] from (test, train, val)
│       ├── frame_names.npz
│       ├── grabnet_[split_name].npz
│       └── data
│           ├── s1
│           ├── ...
│           └── s10
└── tools
    ├── object_meshes
    └── subject_meshes
```
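The `.npz` files above are standard NumPy archives, so you can inspect a split's fields directly with `numpy.load`. The sketch below writes a toy archive with made-up keys (the real `grabnet_[split_name].npz` files use GrabNet's own keys) and reads it back the same way:

```python
import tempfile
from pathlib import Path
import numpy as np

# Write a toy split archive; both keys and shapes here are hypothetical.
tmp = Path(tempfile.mkdtemp()) / "grabnet_train.npz"
np.savez(str(tmp),
         verts_object=np.zeros((4, 2048, 3)),   # hypothetical key
         global_orient=np.zeros((4, 3)))        # hypothetical key

# Inspect the archive: list its arrays and their shapes.
with np.load(tmp) as split:
    shapes = {k: split[k].shape for k in split.files}
print(shapes)
```

The same pattern works for `bps.npz` and `frame_names.npz`, which makes it easy to verify a download before training.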
After installing the GrabNet package and its dependencies, and downloading the data and the MANO models from the MANO website, you should be able to run the following examples:
Generate grasps for a new object of your choice:

```bash
python grabnet/tests/grab_new_objects.py --obj-path $NEW_OBJECT_PATH \
                                         --rhm-path $MANO_MODEL_FOLDER
```
Run GrabNet on the test split of the dataset:

```bash
python grabnet/tests/test.py --rhm-path $MANO_MODEL_FOLDER \
                             --data-path $PATH_TO_GRABNET_DATA
```
To retrain GrabNet with a new configuration, please use the following command:

```bash
python train.py --work-dir $SAVING_PATH \
                --rhm-path $MANO_MODEL_FOLDER \
                --data-path $PATH_TO_GRABNET_DATA
```
To evaluate the trained model, run:

```bash
python eval.py --rhm-path $MANO_MODEL_FOLDER \
               --data-path $PATH_TO_GRABNET_DATA
```
If you use this code, please cite:

```bibtex
@inproceedings{GRAB:2020,
  title     = {{GRAB}: A Dataset of Whole-Body Human Grasping of Objects},
  author    = {Taheri, Omid and Ghorbani, Nima and Black, Michael J. and Tzionas, Dimitrios},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2020},
  url       = {https://grab.is.tue.mpg.de}
}
```
Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions in the LICENSE file and any accompanying documentation before you download and/or use the GRAB data, model and software, (the "Data & Software"), including 3D meshes (body and objects), images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.
Special thanks to Mason Landry for his invaluable help with this project.
The code of this repository was implemented by Omid Taheri and Nima Ghorbani.
For questions, please contact grab@tue.mpg.de.
For commercial licensing (and all related questions for business applications), please contact ps-licensing@tue.mpg.de.