SAM-6D: Segment Anything Model Meets Zero-Shot 6D Object Pose Estimation
Jiehong Lin, Lihua Liu, Dekun Lu, Kui Jia
CVPR 2024
In this work, we employ Segment Anything Model (SAM) as an advanced starting point for zero-shot 6D object pose estimation from RGB-D images, and propose a novel framework, named SAM-6D, which utilizes two dedicated sub-networks, an Instance Segmentation Model (ISM) and a Pose Estimation Model (PEM), to realize the focused task. Segmentation defaults to SAM; FastSAM can be used instead by setting SEGMENTOR_MODEL=fastsam in demo.sh (see the inference steps below).
Please clone the repository locally:
git clone https://github.com/JiehongLin/SAM-6D.git
Install the environment and download the model checkpoints:
cd SAM-6D
sh prepare.sh
We also provide a docker image for convenience.
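If you take the docker route, a minimal sketch of running the container is given below; <image-tag> is only a placeholder for the tag of the provided image, and the mount path inside the container is an assumption, so adjust both to your setup.
# run the provided image with GPU access and the repository mounted
# (<image-tag> is a placeholder; replace it with the actual image tag)
docker run --gpus all -it -v $(pwd)/SAM-6D:/workspace/SAM-6D <image-tag> /bin/bash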
# set the paths
export CAD_PATH=Data/Example/obj_000005.ply    # path to a given CAD model (in mm)
export RGB_PATH=Data/Example/rgb.png           # path to a given RGB image
export DEPTH_PATH=Data/Example/depth.png       # path to a given depth map (in mm)
export CAMERA_PATH=Data/Example/camera.json    # path to the given camera intrinsics
export OUTPUT_DIR=Data/Example/outputs         # path to a directory for saving the results
# run inference
cd SAM-6D
sh demo.sh
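For reference, a sketch of the camera intrinsics file is shown below, assuming a BOP-style layout with a row-major, flattened 3x3 cam_K matrix and a depth_scale factor; the numeric values are illustrative only, so check the shipped Data/Example/camera.json for the exact keys and substitute your own sensor's intrinsics.
# write an illustrative camera.json (placeholder values, not the example data's intrinsics)
cat > Data/Example/camera.json <<'EOF'
{
  "cam_K": [605.0, 0.0, 320.0, 0.0, 605.0, 240.0, 0.0, 0.0, 1.0],
  "depth_scale": 1.0
}
EOF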
If you find our work useful in your research, please consider citing:
@article{lin2023sam,
  title={SAM-6D: Segment Anything Model Meets Zero-Shot 6D Object Pose Estimation},
  author={Lin, Jiehong and Liu, Lihua and Lu, Dekun and Jia, Kui},
  journal={arXiv preprint arXiv:2311.15707},
  year={2023}
}
If you have any questions, please feel free to contact the authors.
Jiehong Lin: mortimer.jh.lin@gmail.com
Lihua Liu: lihualiu.scut@gmail.com
Dekun Lu: derkunlu@gmail.com
Kui Jia: kuijia@gmail.com