SurfEmb: Dense and Continuous Correspondence Distributions
for Object Pose Estimation with Learnt Surface Embeddings
Rasmus Laurvig Haugaard, Anders Glent Buch
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2022
pre-print | project-site
The easiest way to explore correspondence distributions is through the project site.
The following describes how to reproduce the results.
Download surfemb:
$ git clone https://github.com/rasmushaugaard/surfemb.git
$ cd surfemb
All following commands are expected to be run in the project root directory.
Install conda, create a new environment, surfemb, and activate it:
$ conda env create -f environment.yml
$ conda activate surfemb
Download and extract datasets from the BOP site. The base archive and object models are needed for both training and inference. For training, the PBR-BlenderProc4BOP training images are needed as well, and for inference, the BOP'19/20 test images are needed.
Extract the datasets under data/bop (or make a symbolic link).
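If the BOP archives are already extracted elsewhere, a symbolic link avoids duplicating the data. The layout below is only a sketch of the typical BOP structure for T-LESS; the source path is illustrative, and the exact folder names depend on the dataset (see the BOP site):
$ mkdir -p data && ln -s /path/to/existing/bop data/bop
data/bop/tless/
  models_cad/          object models
  train_pbr/           PBR-BlenderProc4BOP training images (training only)
  test_primesense/     BOP'19/20 test images (inference only)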
Download a trained model (see releases):
$ wget https://github.com/rasmushaugaard/surfemb/releases/download/v0.0.1/tless-2rs64lwh.compact.ckpt -P data/models
OR
Train a model:
$ python -m surfemb.scripts.train [dataset] --gpus [gpu ids]
For example, to train a model on T-LESS on cuda:0:
$ python -m surfemb.scripts.train tless --gpus 0
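If the training script accepts several GPU ids at once (the plural placeholder above suggests so, but this is an assumption), a multi-GPU run might look like:
$ python -m surfemb.scripts.train tless --gpus 0 1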
We use the detections from CosyPose's MaskRCNN models and sample surface points evenly for inference.
For ease of use, this data can be downloaded and extracted as follows:
$ wget https://github.com/rasmushaugaard/surfemb/releases/download/v0.0.1/inference_data.zip
$ unzip inference_data.zip
To see pose estimation examples on the training images run
$ python -m surfemb.scripts.infer_debug [model_path] --device [device]
[device] could for example be cuda:0 or cpu.
Add --real to use the test images with simulated crops based on the ground truth poses, or further add --detections to use the CosyPose detections.
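For instance, combining the options above with the T-LESS checkpoint downloaded earlier:
$ python -m surfemb.scripts.infer_debug data/models/tless-2rs64lwh.compact.ckpt --device cuda:0 --real --detections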
Inference is run on the (real) test images with CosyPose detections:
$ python -m surfemb.scripts.infer [model_path] --device [device]
Pose estimation results are saved to data/results.
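For example, with the downloaded T-LESS model:
$ python -m surfemb.scripts.infer data/models/tless-2rs64lwh.compact.ckpt --device cuda:0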
To obtain results with depth (requires running normal inference first), run
$ python -m surfemb.scripts.infer_refine_depth [model_path] --device [device]
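Again using the T-LESS model as an example:
$ python -m surfemb.scripts.infer_refine_depth data/models/tless-2rs64lwh.compact.ckpt --device cuda:0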
The results can be formatted for BOP evaluation using
$ python -m surfemb.scripts.misc.format_results_for_eval [poses_path]
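Here, [poses_path] is the poses file written to data/results by the inference step; the bracketed name below is a placeholder for whichever file was actually produced:
$ python -m surfemb.scripts.misc.format_results_for_eval data/results/[poses file]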
Either upload the formatted results to the BOP Challenge website or evaluate using the BOP toolkit.
Custom dataset: Format the dataset as a BOP dataset and put it in data/bop.
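A rough sketch of the expected BOP layout (names follow the BOP dataset format; include only the splits and files your dataset actually has):
data/bop/[dataset_name]/
  camera.json              global camera parameters
  models/                  obj_000001.ply, ..., models_info.json
  train_pbr/[scene_id]/
    rgb/                   color images
    depth/                 depth images (optional)
    mask_visib/            visible-part instance masks
    scene_camera.json      per-image camera parameters
    scene_gt.json          ground-truth object poses
    scene_gt_info.json     bounding boxes and visibility statistics
  test/[scene_id]/         same structure for the test split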