mapillary / OpenSfM

Open source Structure-from-Motion pipeline
https://www.opensfm.org/
BSD 2-Clause "Simplified" License

Calibration of Spherical Images together with Perspective Images #898

Open one-zero-zero opened 2 years ago

one-zero-zero commented 2 years ago

Hi,

I'm trying to calibrate 360 images captured with a GoPro Max together with regular perspective images, and the feature matching stage fails some percentage of the time (mostly in indoor settings). I believe this is because feature extraction does not differentiate spherical images from perspective ones, so the descriptors differ substantially due to the warped textures near the poles of the spherical images compared to the perspective ones. This ends up producing very few matches between the 360 and perspective images.

I'd like to implement new functionality to remedy this. My idea is that while computing the features (and descriptors) of spherical images, I will compute cubemap images from the sphericals and extract the features on the cubemap textures. I'll then map the pixel locations from the cubemap back to spherical locations, so the rest of the pipeline stays the same. My feature descriptors will then live in perspective space. I'm also thinking about rotating the spherical image (45 degrees around the x axis and 45 degrees around the y axis), extracting features on the new cubemap, and augmenting the original set with these, to also capture features around the cubemap face boundaries.
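To make the mapping concrete, here is a minimal sketch of going from a keypoint on a cubemap face back to a pixel in the equirectangular (spherical) image. This is an illustration, not OpenSfM's implementation: the face layout, axis signs, and equirectangular conventions are my assumptions and would need to be aligned with OpenSfM's spherical camera model.

```python
import math

def face_pixel_to_bearing(u, v, face):
    """(u, v) in [-1, 1] on a 90-degree-FOV cube face -> unit 3D bearing.

    Face layout and axis signs are illustrative assumptions.
    """
    if face == "front":
        x, y, z = u, v, 1.0
    elif face == "back":
        x, y, z = -u, v, -1.0
    elif face == "right":
        x, y, z = 1.0, v, -u
    elif face == "left":
        x, y, z = -1.0, v, u
    elif face == "up":
        x, y, z = u, -1.0, v
    elif face == "down":
        x, y, z = u, 1.0, -v
    else:
        raise ValueError(f"unknown face: {face}")
    n = math.sqrt(x * x + y * y + z * z)
    return x / n, y / n, z / n

def bearing_to_equirect(x, y, z, width, height):
    """Unit bearing -> pixel in an equirectangular image of size width x height."""
    lon = math.atan2(x, z)                    # longitude in [-pi, pi]
    lat = math.asin(max(-1.0, min(1.0, y)))   # latitude in [-pi/2, pi/2]
    px = (lon / (2.0 * math.pi) + 0.5) * width
    py = (lat / math.pi + 0.5) * height
    return px, py
```

With this, a keypoint detected at face coordinates (u, v) would be stored at `bearing_to_equirect(*face_pixel_to_bearing(u, v, face), W, H)`, and the descriptor computed on the perspective face is kept unchanged. For example, the center of the "front" face maps to the center of the equirectangular image.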

This should also improve matching near the poles of the 360 images. I noticed that most matches close to the poles are outliers even in 360-to-360 matching: as the camera moves around, the pole location changes, so different parts of the scene get warped. The cubemap-based feature descriptor computation should improve that scenario as well.
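The 45-degree rotation augmentation mentioned above could be sketched like this. It rotates every sampling bearing before building the second cubemap, so points that fell on face boundaries or near the poles of the first cubemap land in face interiors of the second one. The rotation order and signs here are assumptions for illustration only; detected keypoints would be rotated back by the transpose of `R` before mapping to equirectangular pixels.

```python
import math

def rot_x(a):
    """Rotation matrix about the x axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(a):
    """Rotation matrix about the y axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# Combined rotation used when sampling the augmented (second) cubemap.
R = mat_mul(rot_y(math.pi / 4.0), rot_x(math.pi / 4.0))

# The original "north pole" bearing (assumed (0, -1, 0) here) is moved
# away from any face boundary, into the interior of a face:
pole = mat_vec(R, (0.0, -1.0, 0.0))
```

Because `R` is orthonormal, bearings keep unit length, so the equirectangular mapping is unaffected apart from the intended re-orientation.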

It looks straightforward to me, but I wanted to ask your opinion before I start implementing it. Do you have any recommendations, or is there something I need to be careful about here?

cheers!

fabianschenk commented 2 years ago

Hi @one-zero-zero ,

Mapping the panoramas to perspective images using the cube map is a good direction to explore. This is already implemented in OpenSfM and you can just reuse this method: https://github.com/mapillary/OpenSfM/blob/746c6c0f6ec167ccb655fd623a9b9ec831b37c26/opensfm/undistort.py#L185-L197

Best, Fabian

one-zero-zero commented 2 years ago

Thanks @fabianschenk. Yes, I've noticed that code segment in undistort.py and was already planning to reuse it.