jprodriguezg opened 1 year ago
Hi!
In case it might help anyone, here is how I was able to get the same values as the provided maps:
```python
import cv2
import numpy as np
import yaml

# Load the Kalibr camera chain for the indoor_flying sequences.
with open("/path/to/camchain-imucam-indoor_flying.yaml", "r") as stream:
    yaml_data = yaml.safe_load(stream)

davis_intrinsics = np.array(yaml_data["cam0"]["intrinsics"])   # fx, fy, cx, cy
davis_dist = np.array(yaml_data["cam0"]["distortion_coeffs"])  # equidistant (fisheye) coefficients
davis_rect = np.array(yaml_data["cam0"]["rectification_matrix"])
davis_proj = np.array(yaml_data["cam0"]["projection_matrix"])

# Assemble the 3x3 camera matrix from [fx, fy, cx, cy].
K = np.array([[davis_intrinsics[0], 0, davis_intrinsics[2]],
              [0, davis_intrinsics[1], davis_intrinsics[3]],
              [0, 0, 1]])

# Distorted pixel coordinates, shaped (1, N, 2) as cv2.fisheye expects.
pts_to_rectify = np.array([[[0., 0.],
                            [0., 1.],
                            [0., 2.]]])

rectified_pts = cv2.fisheye.undistortPoints(pts_to_rectify, K, davis_dist, None,
                                            davis_rect, davis_proj)
print(rectified_pts)
```
Hello, thanks for the cool dataset!
I am wondering how the maps used to rectify distorted pixels were computed. I am trying to reproduce them with `cv2.fisheye.undistortPoints`, as suggested here, but the results differ from those provided in `$SEQUENCE_(left/right)_(x/y)_map.txt`. For instance, these are some results:
[Table of sample values not preserved here: the pixels to rectify, the corresponding entries from the provided indoor_flying left map files, and the output of `cv2.fisheye.undistortPoints`.]
Could someone please explain how these maps were computed?
Thanks!