Our sensors (and hence our images) are rectangular, not square, but we know the field-of-view angular dimensions very accurately. When we call tetra3.solve_from_image(), what should we provide for the fov_estimate angle? The smaller of the camera's (width, height) FOV angles? The larger of the two? Or the diagonal (i.e. sqrt(fov_width^2 + fov_height^2))?
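For reference, here is a small sketch comparing the three candidate values. The example angles (10.0° x 7.5°) are hypothetical, not from any particular camera; note also that the sqrt expression is a small-angle approximation of the diagonal, so the exact pinhole-model diagonal is included for comparison:

```python
import math

# Hypothetical FOV angles in degrees -- substitute your calibrated values.
fov_w, fov_h = 10.0, 7.5

smaller = min(fov_w, fov_h)   # candidate 1: smaller axis
larger = max(fov_w, fov_h)    # candidate 2: larger axis

# Candidate 3a: diagonal via the small-angle approximation from the question.
diag_approx = math.hypot(fov_w, fov_h)

# Candidate 3b: exact diagonal for a pinhole camera, combining the
# half-angle tangents of each axis.
diag_exact = 2 * math.degrees(
    math.atan(math.hypot(math.tan(math.radians(fov_w / 2)),
                         math.tan(math.radians(fov_h / 2)))))

print(smaller, larger, diag_approx, round(diag_exact, 3))
```

For moderate fields of view the approximate and exact diagonals agree to within a few hundredths of a degree, so the choice between them matters far less than the choice among smaller/larger/diagonal.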