LinDixuan / OmniHands


OmniHands


Creating Environment

conda create --name omhand python=3.10
conda activate omhand
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu117
pip install -e .[all]
pip install -v -e third-party/ViTPose

Download Checkpoints

We provide models and checkpoints for the following tasks: single-image reconstruction, video reconstruction, and multiview reconstruction.

Download our checkpoints Demo_Video.pth, Demo_Image.pth, Demo_Multiview.pth, and Eval_Video.pth, and put them under ./checkpoints. The Demo checkpoints are for the in-the-wild demos, and the Eval checkpoint is for dataset validation on InterHand2.6M.

Also download the MANO model files from MANO, and put MANO_RIGHT.pkl and MANO_LEFT.pkl under _DATA/data/mano.
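Before running the demos, it can help to confirm that every downloaded file is where the scripts expect it. The sketch below checks the paths listed above with only the standard library; the helper name `missing_files` is our own, not part of the repository.

```python
from pathlib import Path

# Files this README asks you to download, relative to the repo root.
REQUIRED_FILES = [
    "checkpoints/Demo_Video.pth",
    "checkpoints/Demo_Image.pth",
    "checkpoints/Demo_Multiview.pth",
    "checkpoints/Eval_Video.pth",
    "_DATA/data/mano/MANO_RIGHT.pkl",
    "_DATA/data/mano/MANO_LEFT.pkl",
]

def missing_files(root="."):
    """Return the required files that are not yet present under `root`."""
    root = Path(root)
    return [f for f in REQUIRED_FILES if not (root / f).is_file()]

if __name__ == "__main__":
    missing = missing_files()
    if missing:
        print("Missing files:")
        for f in missing:
            print("  -", f)
    else:
        print("All checkpoints and MANO files are in place.")
```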

Demo

Video

Set VIDEO_PATH to the path of your video file, then run:

python run_demo.py \
    --checkpoint ./checkpoints/Demo_Video.pth \
    --cfg ./checkpoints/config_video.yaml \
    --video_dir VIDEO_PATH \
    --out_dir ./demo_out \
    --gpu 0 \
    --mode video

Images

Put all images in IMAGE_FOLDER, then run:

python run_demo.py \
    --checkpoint ./checkpoints/Demo_Image.pth \
    --cfg ./checkpoints/config_image.yaml \
    --image_dir IMAGE_FOLDER \
    --out_dir ./demo_out \
    --gpu 0 \
    --mode image

Multiview

Put multi-view images of the same scene in IMAGE_FOLDER, then run:

python run_demo.py \
    --checkpoint ./checkpoints/Demo_Multiview.pth \
    --cfg ./checkpoints/config_multi.yaml \
    --image_dir IMAGE_FOLDER \
    --out_dir ./demo_out \
    --gpu 0 \
    --mode multi
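The three demo invocations above differ only in the checkpoint, the config file, the input-directory flag, and the mode. If you want to drive run_demo.py from Python (e.g., over many inputs), a small wrapper can assemble the command; the `demo_command` helper and `MODES` table below are our own sketch built from the flags shown above, not part of the repository.

```python
import subprocess

# Checkpoint/config pairs as listed above; the three demo modes differ
# only in these files and in the input-directory flag.
MODES = {
    "video": ("Demo_Video.pth", "config_video.yaml", "--video_dir"),
    "image": ("Demo_Image.pth", "config_image.yaml", "--image_dir"),
    "multi": ("Demo_Multiview.pth", "config_multi.yaml", "--image_dir"),
}

def demo_command(mode, input_dir, out_dir="./demo_out", gpu=0):
    """Build the run_demo.py argument list for one of the three demo modes."""
    ckpt, cfg, input_flag = MODES[mode]
    return [
        "python", "run_demo.py",
        "--checkpoint", f"./checkpoints/{ckpt}",
        "--cfg", f"./checkpoints/{cfg}",
        input_flag, input_dir,
        "--out_dir", out_dir,
        "--gpu", str(gpu),
        "--mode", mode,
    ]

# Example: run the image demo on a folder of frames (from the repo root):
# subprocess.run(demo_command("image", "./my_images"), check=True)
```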

Acknowledgements

HaMeR

ViTPose

IntagHand

Deformer