
Two-view Geometry Scoring Without Correspondences

This is the reference PyTorch implementation for testing the FSNet fundamental matrix scoring method described in

Two-view Geometry Scoring Without Correspondences

Axel Barroso-Laguna, Eric Brachmann, Victor Adrian Prisacariu, Gabriel Brostow and Daniyar Turmukhambetov

Paper, Supplemental Material

Patent pending. This code is for non-commercial use; please see the license file for terms. If you find any part of this codebase helpful, please cite our paper using the BibTeX below and link to this repo. Thank you!

3-minute CVPR presentation video: link

Overview

FSNet takes as input two RGB images and a fundamental matrix, and outputs the relative translation and rotation errors. These errors are used as scores to rank candidate fundamental matrices.
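As a minimal sketch of this ranking step (predict_errors below is a placeholder for the FSNet forward pass, not the actual interface of this repo, and combining the two errors with a max is just one possible choice), scoring reduces to sorting a pool of hypotheses by their predicted pose errors:

import numpy as np

def rank_fundamentals(image_src, image_dst, fundamentals, predict_errors):
    # predict_errors is a stand-in for the FSNet forward pass: it maps
    # (image_src, image_dst, F) to predicted (translation_error, rotation_error).
    scores = []
    for F in fundamentals:
        t_err, r_err = predict_errors(image_src, image_dst, F)
        # Score each hypothesis by its worst predicted error; lower is better.
        scores.append(max(t_err, r_err))
    order = np.argsort(scores)
    return [fundamentals[i] for i in order], [scores[i] for i in order]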

Setup

Assuming a fresh Anaconda distribution, you can install dependencies with:

conda env create -f resources/FSNet_environment.yml

We ran our experiments with PyTorch 1.11, CUDA 11.3, Python 3.9.16 and Debian GNU/Linux 10.
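To check that the installed environment roughly matches the versions above, a quick sanity check from Python:

import sys
import torch

print(sys.version)                # we used Python 3.9.16
print(torch.__version__)          # we used PyTorch 1.11
print(torch.version.cuda)         # we used CUDA 11.3
print(torch.cuda.is_available())  # True if a compatible GPU is visible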

Running the FSNet network

demo_inference.py can be used to select the best fundamental matrix from a pool according to FSNet scoring. As an example, we provide two images, im_src.jpg and im_dst.jpg, together with fundamentals.npy, which contains sampled fundamental matrices for that image pair. Images and fundamental matrices are stored in the resources/im_test folder. For a quick test, please run:
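The exact command-line flags are defined in demo_inference.py; the invocation below is only a guess at their names (run python demo_inference.py --help for the actual interface):

# hypothetical flag names, shown only to illustrate the expected inputs
python demo_inference.py --im_src resources/im_test/im_src.jpg \
                         --im_dst resources/im_test/im_dst.jpg \
                         --fundamentals resources/im_test/fundamentals.npy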

Arguments:

The demo script returns the top-scoring fundamental matrix and its predicted translation and rotation errors. Optionally, the script also prints the epipolar lines corresponding to the selected fundamental matrix for easy inspection.
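If you want to overlay epipolar lines yourself, here is a minimal sketch using OpenCV (independent of this repo's own plotting code; F, im_dst and pts_src below are assumptions, e.g. the top-scoring fundamental matrix and a few pixel locations in im_src.jpg):

import cv2
import numpy as np

def draw_epipolar_lines(im_dst, F, pts_src):
    # For points in the source image, compute the corresponding epipolar
    # lines a*x + b*y + c = 0 in the destination image.
    pts = np.float32(pts_src).reshape(-1, 1, 2)
    lines = cv2.computeCorrespondEpilines(pts, 1, F).reshape(-1, 3)
    h, w = im_dst.shape[:2]
    out = im_dst.copy()
    for a, b, c in lines:
        if abs(b) < 1e-9:  # skip (near-)vertical lines in this simple sketch
            continue
        # Intersect each line with the left (x=0) and right (x=w) image borders.
        p0 = (0, int(round(-c / b)))
        p1 = (w, int(round(-(c + a * w) / b)))
        cv2.line(out, p0, p1, (0, 255, 0), 1)
    return out

# Example usage (F_best is the fundamental matrix selected by the demo):
# im_dst = cv2.imread('resources/im_test/im_dst.jpg')
# pts_src = np.array([[100, 200], [300, 150]], dtype=np.float32)
# cv2.imwrite('epipolar_lines.jpg', draw_epipolar_lines(im_dst, F_best, pts_src))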

BibTeX

If you use this code in your research, please consider citing our paper:


@inproceedings{barroso2023fsnet,
  title={Two-view Geometry Scoring Without Correspondences},
  author={Barroso-Laguna, Axel and Brachmann, Eric and Prisacariu, Victor and Brostow, Gabriel and Turmukhambetov, Daniyar},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023}
}