This repository contains instructions for obtaining the data and code of the work "Dense Hand-Object (HO) GraspNet with Full Grasping Taxonomy and Dynamics", presented at ECCV 2024.
Project page: HOGraspNet
HOGraspNet provides the following data and models:
data/source_data: Full 1920x1080 RGB & Depth images. ("Source_data/Object Pose" is unnecessary data and will be removed soon.)
data/labeling_data: JSON files for annotations.
data/extra_data: Binary hand & object masks for the cropped images. (Bounding boxes are provided through the dataloader module.)
data/source_augmented: Cropped images around the hand and background-augmented RGB images.
data/obj_scanned_models: Manually scanned 3D models of the 30 objects used in the dataset.
This code is tested with PyTorch 2.0.0 and 2.3.1 and Python 3.10 on Linux and Windows 11.
Clone and install the following main packages.
git clone https://github.com/UVR-WJCHO/HOGraspNet.git
cd HOGraspNet
pip install -r requirements.txt
(TBD, for visualization) Install pytorch3d following the instructions here (our code uses version 0.7.3).
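After installation, a quick import check like the sketch below can confirm the core packages are available; pytorch3d is only needed for visualization, so it is treated as optional here.

```python
# Minimal environment check (a sketch, not part of the repository scripts).
import torch

print("torch:", torch.__version__)  # tested with 2.0.0 and 2.3.1

try:
    import pytorch3d
    print("pytorch3d:", pytorch3d.__version__)  # 0.7.3 is used by the visualization code
except ImportError:
    print("pytorch3d not installed (only needed for visualization)")
```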
Set the environment variable for the dataset path:
(linux) export HOG_DIR=/path/to/HOGraspNet
(windows) set HOG_DIR=/path/to/HOGraspNet
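The downstream scripts read this variable from the environment (see the dataloader example below). A minimal sketch of that lookup; the fallback error message here is illustrative only, not taken from the repository:

```python
import os

# Resolve the HOGraspNet dataset root from the HOG_DIR environment variable.
hog_dir = os.environ.get("HOG_DIR")
if hog_dir is None:
    raise RuntimeError("Please set HOG_DIR to the HOGraspNet dataset path first.")

data_dir = os.path.join(hog_dir, "data")
print("Dataset root:", hog_dir)
print("Data directory:", data_dir)
```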
Please fill in this form to download the dataset after reading the terms and conditions.
Copy the data URL from the form, then download and unzip it:
cd assets
wget -O urls.zip "[URL]"
unzip urls.zip
cd ..
After running the above, you should expect:
HOGraspNet/assets/urls/
images.txt: Full RGB & Depth images
annotations.txt: Annotations
extra.txt: Hand & object segmentation masks (pseudo)
images_augmented.txt: Cropped & background-augmented RGB images
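If you want to inspect these lists before downloading, a small sketch like the following counts the links in each file; it assumes one URL per line:

```python
import os

# Illustrative check of the downloaded URL lists (assumes one URL per line).
url_dir = os.path.join("assets", "urls")
for fname in ("images.txt", "annotations.txt", "extra.txt", "images_augmented.txt"):
    path = os.path.join(url_dir, fname)
    with open(path) as f:
        urls = [line.strip() for line in f if line.strip()]
    print(f"{fname}: {len(urls)} links")
```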
Download procedure
⚠️ [24.08.14] The connection to Dropbox shared links might be unstable. We are testing alternative methods to ensure a stable download environment.
Download the dataset with the default options:
python scripts/download_data.py
or with manual options (example):
python scripts/download_data.py --type [TYPE] --subject [SUBJECT] --objModel [OBJMODEL]
Unzip them all:
python scripts/unzip_data.py
The raw downloaded data can be found under data/zipped/. The unzipped data and models can be found under data/.
See visualization.md for an explanation of how the files can be visualized.
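As a quick sanity check after unzipping, a sketch like the one below lists which components are present under [HOG_DIR]/data. The folder names follow the component list at the top of this README plus data/zipped mentioned above; adjust them to your download options.

```python
import os

# Hypothetical post-unzip check: list which dataset components exist under HOG_DIR/data.
hog_dir = os.environ["HOG_DIR"]
expected = ["zipped", "source_data", "labeling_data", "extra_data",
            "source_augmented", "obj_scanned_models"]

for name in expected:
    path = os.path.join(hog_dir, "data", name)
    status = "found" if os.path.isdir(path) else "missing"
    print(f"{name:20s} {status}")
```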
Options
Depending on your usage of the dataset, we suggest different download options.
[TYPE] (type: int, default: 0):
[SUBJECT] (type: string, default: all):
[OBJMODEL] (type: bool, default: True):
⚠️ If the full dataset is not downloaded (e.g., setting the subject option to "small" or a specific subject index), only the s0 split is fully available in the dataloader.
Subject info
Here, we provide a summary of each subject's information included in the dataset. HOGraspNet_subject_info
Please check it if you need data on a specific type of subject.
Set the environment variable for the dataset path:
export HOG_DIR=/path/to/HOGraspNet
Utilize the dataloader as below
import os
from scripts.HOG_dataloader import HOGDataset

setup = 's2'
split = 'test'
db_path = os.path.join(os.environ['HOG_DIR'], "data")
dataloader = HOGDataset(setup, split, db_path)
See data_structure.md for the detailed structure of samples from the dataloader (WIP).
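The dataset object can also be wrapped in a standard PyTorch DataLoader. The sketch below assumes HOGDataset behaves as a map-style dataset returning dictionary samples (see data_structure.md for the actual keys); the batch size is arbitrary.

```python
import os
from torch.utils.data import DataLoader
from scripts.HOG_dataloader import HOGDataset

# Sketch: iterate over the s2 test split with a standard PyTorch DataLoader.
dataset = HOGDataset('s2', 'test', os.path.join(os.environ['HOG_DIR'], "data"))
loader = DataLoader(dataset, batch_size=8, shuffle=False, num_workers=0)

for batch in loader:
    if isinstance(batch, dict):
        print(list(batch.keys()))  # see data_structure.md for what each key contains
    break                          # inspect a single batch only
```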
This code is tested with Python 3.10, CUDA 11.8, and PyTorch 2.0.0 on Linux and Windows 11.
Set the MANO code and models in thirdparty, downloaded from the MANO project page (https://mano.is.tue.mpg.de), then:
Remove (or comment out) the print 'FINITO' statement (line 137) in thirdparty/mano_v1_2/webuser/smpl_handpca_wrapper_HAND_only.py, as it is Python 2 syntax.
Set the mano_path parameter in scripts/config.py.
We utilized a differentiable MANO layer for PyTorch from https://github.com/hassony2/manopth (located under thirdparty/manopth).
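Once the MANO models and manopth are in place, a minimal sketch like this can verify that the layer loads and runs. The mano_root path below is an assumption based on the thirdparty layout described above; point it at the directory that actually contains MANO_RIGHT.pkl.

```python
import torch
from manopth.manolayer import ManoLayer

# Sketch: load the differentiable MANO layer and run a zero pose through it.
# mano_root is an assumed path; adjust to where the MANO pickle files live.
mano_layer = ManoLayer(mano_root='thirdparty/mano_v1_2/models',
                       use_pca=True, ncomps=6, side='right')

pose = torch.zeros(1, 6 + 3)    # 3 global rotation + 6 PCA pose coefficients
shape = torch.zeros(1, 10)      # MANO shape parameters
verts, joints = mano_layer(pose, shape)
print(verts.shape, joints.shape)  # (1, 778, 3) vertices and (1, 21, 3) joints
```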
Set up the required CUDA, torch, and pytorch3d environments:
pip install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu118
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py310_cu118_pyt200/download.html
Run the code; this will produce rendered results of 10 random samples from the downloaded data.
python scripts/visualization.py
Rendered images will be saved in [HOG_DIR]/vis/.
As HOGraspNet was captured in a monotone (uniform-background) environment, background augmentation with random images is possible. We utilized samples from the HanCo dataset as background sources.
Here, we provide example code for manual background augmentation on the source data. This requires the source_augmented and extra_data parts of the dataset ([TYPE] 2 & 4 in the download options).
Download or manually prepare background sample images in [HOG_DIR]/data/bg_samples.
Run the code; this will produce augmented RGB images in [HOG_DIR]/data/manual_augmented.
python scripts/manual_augmentation.py
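The script above handles the augmentation for you, but the underlying idea is plain mask-based compositing: keep the hand/object pixels from the cropped image and replace the rest with a random background. A rough illustration with placeholder file names (not the dataset's actual naming scheme):

```python
import cv2
import numpy as np

# Illustration of the compositing idea behind background augmentation.
rgb = cv2.imread("cropped_rgb.png")                               # cropped image (source_augmented)
mask = cv2.imread("hand_object_mask.png", cv2.IMREAD_GRAYSCALE)   # binary mask (extra_data)
bg = cv2.imread("bg_sample.jpg")                                  # e.g., a HanCo background sample
bg = cv2.resize(bg, (rgb.shape[1], rgb.shape[0]))

fg = (mask > 0)[..., None]              # foreground = hand & object pixels
augmented = np.where(fg, rgb, bg)       # keep foreground, swap in the new background
cv2.imwrite("augmented.png", augmented)
```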
The dataset is released for academic research only and is free to researchers from educational or research institutions for non-commercial purposes. By downloading the dataset you agree (unless with express permission of the authors) not to redistribute, modify, or commercially use this dataset, in whole or in part, in any way or form.
If using this dataset, please cite the following paper:
@inproceedings{2024graspnet,
title={Dense Hand-Object(HO) GraspNet with Full Grasping Taxonomy and Dynamics},
author={Cho, Woojin and Lee, Jihyun and Yi, Minjae and Kim, Minje and Woo, Taeyun and Kim, Donghwan and Ha, Taewook and Lee, Hyokeun and Ryu, Je-Hwan and Woo, Woontack and Kim, Tae-Kyun},
booktitle={ECCV},
year={2024}
}
This research used the "3D hand motion data for object manipulation" dataset, built with support from the National Information Society Agency (NIA) and funded by the Ministry of Science and ICT, Korea, as part of 'The Open AI Dataset Project (AI-Hub, S. Korea)'. All data information can be accessed through AI-Hub (www.aihub.or.kr).