russelldj opened 4 months ago

360-degree cameras can be useful for mapping understories. Currently, we only support traditional "perspective" cameras. It would be useful to think through what changes we'd need to support 360 cameras. This would likely require both updated renderers that support 360 cameras and additional metadata/new ways of handling images taken from them.
As far as I can tell, this data is heavily pre-processed by the GoPro, so it follows a very standardized form: an equirectangular projection. Each pixel step in the horizontal direction is an equal step in the azimuth angle it represents, and each step in the vertical direction is an equal step in elevation. So pixels near the vertical middle of the image represent a roughly square field of view, and they represent increasingly narrow slices as you go toward the top or bottom.
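As a concrete check of that mapping, here is a minimal sketch (plain NumPy, hypothetical function names) of the pixel-to-angle relationship for an equirectangular image; the half-pixel centering and the sign conventions are assumptions, not something the GoPro spec pins down for us here:

```python
import numpy as np


def pixel_to_angles(col, row, width, height):
    """Map equirectangular pixel coordinates to (azimuth, elevation) in radians.

    Assumes the image spans 360 degrees of azimuth and 180 degrees of
    elevation, with azimuth increasing left-to-right and elevation
    decreasing top-to-bottom (elevation 0 at the vertical middle).
    """
    azimuth = (col + 0.5) / width * 2 * np.pi - np.pi     # in [-pi, pi)
    elevation = np.pi / 2 - (row + 0.5) / height * np.pi  # in (-pi/2, pi/2)
    return azimuth, elevation


def angles_to_pixel(azimuth, elevation, width, height):
    """Inverse mapping: (azimuth, elevation) in radians to pixel coordinates."""
    col = (azimuth + np.pi) / (2 * np.pi) * width - 0.5
    row = (np.pi / 2 - elevation) / np.pi * height - 0.5
    return col, row
```

Because every pixel spans the same angular increment, the angular resolution is constant across the image, but a pixel's footprint on the sphere shrinks by a factor of cos(elevation) toward the poles, consistent with the narrowing slices described above.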
Long story short, as far as I can tell, Metashape doesn't actually need to report any camera parameters for us to be able to use this data.
Unfortunately, I think this problem is starting to show some downsides of how this code is structured. But for now, I think the easiest way to approach it is as follows:

1. Update the `geograypher.PhotogrammetryCamera` class to support 360 cameras.
2. Update `geograypher.PhotogrammetryCameraSet` so that it supports reading in this information. You may be able to modify `MetashapeCameraSet` to do this, or create another class (e.g. `MetashapeSphericalCameraSet`) to do it; see the XML sketch after this list.
3. Add a PyTorch3D camera model to handle the distortion (e.g. following the example of the fisheye camera). The math for this distortion will have to be worked out so that it represents the distortion of treating the equirectangular image as a perspective projection; a sketch of the core projection follows below. This will almost certainly not be a type of distortion seen in a real camera.
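For step 2, a starting point could be detecting spherical sensors in the Metashape cameras XML. This sketch assumes Metashape tags such sensors with `type="spherical"` in its exported file and uses the usual `<document><chunk><sensors><sensor .../>` layout; both are assumptions worth verifying against a real export from this dataset:

```python
import xml.etree.ElementTree as ET


def find_spherical_sensor_ids(camera_xml_path):
    """Return IDs of sensors marked as spherical in a Metashape cameras XML.

    Assumes 360 cameras carry type="spherical" on their <sensor> element;
    check this against an actual export before relying on it.
    """
    root = ET.parse(camera_xml_path).getroot()
    spherical_ids = []
    for sensor in root.iter("sensor"):
        if sensor.get("type") == "spherical":
            spherical_ids.append(sensor.get("id"))
    return spherical_ids
```

A `MetashapeSphericalCameraSet` could use a check like this to decide, per sensor, whether to build a perspective camera or a 360 camera object.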
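For step 3, the core math is the forward projection: take points in the camera frame and map them to equirectangular pixel coordinates. This sketch is plain PyTorch and deliberately avoids guessing at the PyTorch3D `CamerasBase` interface; wiring it into an actual camera class (the way the fisheye camera does) would be the remaining work, and the axis convention below is an assumption:

```python
import torch


def project_points_equirectangular(points_cam, width, height):
    """Project camera-frame 3D points to equirectangular pixel coordinates.

    points_cam: (N, 3) tensor in a camera frame assumed to be +x right,
    +y up, +z forward (an assumption; match whatever convention the
    renderer uses). Returns (N, 2) pixel coordinates (col, row) and the
    per-point distance from the camera center.
    """
    x, y, z = points_cam.unbind(-1)
    azimuth = torch.atan2(x, z)                           # [-pi, pi]
    radius = torch.linalg.norm(points_cam, dim=-1)
    elevation = torch.asin(torch.clamp(y / radius, -1.0, 1.0))
    col = (azimuth + torch.pi) / (2 * torch.pi) * width - 0.5
    row = (torch.pi / 2 - elevation) / torch.pi * height - 0.5
    return torch.stack([col, row], dim=-1), radius
```

Note that the "distortion" here is just this nonlinearity relative to a pinhole model: straight lines in the scene generally map to curves in the image, which matches the point above that no real-lens distortion model will reproduce it.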