This is the official source code for PVN3D: A Deep Point-wise 3D Keypoints Voting Network for 6DoF Pose Estimation, CVPR 2020.
pip3 install -r requirement.txt
sudo apt install python3-tk
python3 setup.py build_ext
LineMOD: Download the preprocessed LineMOD dataset. Unzip it and link the unzipped Linemod_preprocessed/ to pvn3d/datasets/linemod/Linemod_preprocessed:
ln -s path_to_unzipped_Linemod_preprocessed pvn3d/datasets/linemod/
YCB-Video: Download the YCB-Video Dataset from PoseCNN. Unzip it and link the unzipped YCB_Video_Dataset to pvn3d/datasets/ycb/YCB_Video_Dataset:
ln -s path_to_unzipped_YCB_Video_Dataset pvn3d/datasets/ycb/
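The `ln -s` steps above can also be scripted and sanity-checked from Python. A minimal sketch; the helper name `ensure_dataset_link` is ours, not part of the repo:

```python
import os

def ensure_dataset_link(src, dst):
    """Create symlink dst -> src, mirroring the `ln -s` step above.

    Returns True if a new link was created, False if dst already exists.
    """
    if os.path.islink(dst) or os.path.exists(dst):
        return False  # already linked or present
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    os.symlink(os.path.abspath(src), dst)
    return True

# Usage (paths from the commands above):
# ensure_dataset_link("path_to_unzipped_YCB_Video_Dataset",
#                     "pvn3d/datasets/ycb/YCB_Video_Dataset")
```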
cd pvn3d
python3 -m train.train_linemod_pvn3d --cls ape
The trained checkpoints are stored in train_log/linemod/checkpoints/{cls}/ (train_log/linemod/checkpoints/ape/ in this example).
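Training is launched one class at a time, so covering every object means looping over the classes. A sketch; the class list below assumes the standard 13 LineMOD objects, so verify it against the repo's own configuration:

```python
import subprocess  # used when you switch from the dry run to real runs

# Standard 13 LineMOD object classes (assumed; verify against the repo config).
LINEMOD_CLASSES = [
    "ape", "benchvise", "cam", "can", "cat", "driller", "duck",
    "eggbox", "glue", "holepuncher", "iron", "lamp", "phone",
]

def train_cmd(cls):
    """Build the per-class training command shown above."""
    return ["python3", "-m", "train.train_linemod_pvn3d", "--cls", cls]

for cls in LINEMOD_CLASSES:
    # Dry run: print each command. Replace the print with
    # subprocess.run(train_cmd(cls), check=True) to actually train.
    print(" ".join(train_cmd(cls)))
```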
# commands in eval_linemod.sh
cls='ape'
tst_mdl=train_log/linemod/checkpoints/${cls}/${cls}_pvn3d_best.pth.tar
python3 -m train.train_linemod_pvn3d -checkpoint $tst_mdl -eval_net --test --cls $cls
You can evaluate a different checkpoint by setting tst_mdl to the path of your target model.
For example, move ape_pvn3d_best.pth.tar to train_log/linemod/checkpoints/ape/, then set tst_mdl=train_log/linemod/checkpoints/ape/ape_pvn3d_best.pth.tar for testing.

# commands in demo_linemod.sh
cls='ape'
tst_mdl=train_log/linemod/checkpoints/${cls}/${cls}_pvn3d_best.pth.tar
python3 -m demo -dataset linemod -checkpoint $tst_mdl -cls $cls
The visualization results will be stored in train_log/linemod/eval_results/{cls}/pose_vis
cd pvn3d
python3 -m datasets.ycb.preprocess_testset
python3 -m train.train_ycb_pvn3d
The trained model checkpoints are stored in train_log/ycb/checkpoints/
# commands in eval_ycb.sh
tst_mdl=train_log/ycb/checkpoints/pvn3d_best.pth.tar
python3 -m train.train_ycb_pvn3d -checkpoint $tst_mdl -eval_net --test
You can evaluate a different checkpoint by setting tst_mdl to the path of your target model. To use a pre-trained model, move it to train_log/ycb/checkpoints/ and modify tst_mdl for testing.

# commands in demo_ycb.sh
tst_mdl=train_log/ycb/checkpoints/pvn3d_best.pth.tar
python3 -m demo -checkpoint $tst_mdl -dataset ycb
The visualization results will be stored in train_log/ycb/eval_results/pose_vis
Compile the FPS scripts:

cd lib/utils/dataset_tools/fps/
python3 setup.py build_ext --inplace
Generate object information (3D keypoints, center point, radius, etc.) with the gen_obj_info.py script:
cd ../
python3 gen_obj_info.py --help
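Among the object info generated above are keypoints selected by farthest point sampling (FPS) on the object model. A minimal NumPy sketch of the FPS algorithm itself, not the repo's implementation:

```python
import numpy as np

def farthest_point_sampling(points, k, seed_idx=0):
    """Greedily pick k points, each maximizing its distance to those already chosen."""
    points = np.asarray(points, dtype=np.float64)
    chosen = [seed_idx]
    # Each point's distance to its nearest already-chosen point.
    dist = np.linalg.norm(points - points[seed_idx], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))  # farthest from the current keypoint set
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return points[chosen]
```

The PVN3D paper selects 8 keypoints per object this way; `gen_obj_info.py --help` shows the options actually exposed by the repo.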
Modify the info of your new dataset in PVN3D/pvn3d/common.py, and modify the dataset preprocessing script PVN3D/pvn3d/datasets/ycb/ycb_dataset.py (for multiple objects per scene) or PVN3D/pvn3d/datasets/linemod/linemod_dataset.py (for a single object per scene). Note that you should modify or call the functions that get your model info, such as 3D keypoints, center points, and radius, properly.

python3 -m datasets.linemod.linemod_dataset
Modify the evaluation utilities in PVN3D/pvn3d/lib/utils/pvn3d_eval_utils.py as needed.
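The evaluation utilities compute the ADD and ADD-S pose-error metrics used in the paper. A minimal NumPy sketch of the two metrics, a simplified stand-in for the repo's version:

```python
import numpy as np

def add_metric(pts, R_pred, t_pred, R_gt, t_gt):
    """ADD: mean distance between model points under predicted vs. GT pose."""
    p_pred = pts @ R_pred.T + t_pred
    p_gt = pts @ R_gt.T + t_gt
    return float(np.mean(np.linalg.norm(p_pred - p_gt, axis=1)))

def adds_metric(pts, R_pred, t_pred, R_gt, t_gt):
    """ADD-S (for symmetric objects): mean closest-point distance."""
    p_pred = pts @ R_pred.T + t_pred
    p_gt = pts @ R_gt.T + t_gt
    # For each predicted point, distance to its nearest ground-truth point.
    d = np.linalg.norm(p_pred[:, None, :] - p_gt[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1)))
```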
Please cite PVN3D if you use this repository in your publications:
@InProceedings{He_2020_CVPR,
author = {He, Yisheng and Sun, Wei and Huang, Haibin and Liu, Jianran and Fan, Haoqiang and Sun, Jian},
title = {PVN3D: A Deep Point-Wise 3D Keypoints Voting Network for 6DoF Pose Estimation},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}