Closed: LLHEVER closed this issue 1 year ago
I think the following issue should answer your question: https://github.com/uzh-rpg/DSEC/issues/25
Let me know if your issue is resolved.
@magehrig, thanks, but I want to know whether the code you provided is also suitable for mapping the right image to the right event camera frame. If so, do I just need to change the folder to the right-side folder?
you can use the same code with some slight adaptions. E.g. the cameras are different, that is you have to extract the correct calibration. For the naming convention check out the calibration section here: https://dsec.ifi.uzh.ch/data-format/
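To make the substitution concrete, here is a small sketch of which calibration keys swap when moving from the left camera pair to the right one, following the naming convention on the DSEC data-format page (cam0: left event, cam1: left frame, cam2: right frame, cam3: right event). The exact yaml key names below are assumptions based on that page and this thread, not verified against a downloaded file.

```python
# Calibration keys for each stereo side (assumed from the DSEC naming
# convention; cam0/cam1 are the left pair, cam2/cam3 the right pair).
LEFT_PAIR = {
    'intrinsics': ('camRect0', 'camRect1'),
    'rectify': ('R_rect0', 'R_rect1'),
    'extrinsic': 'T_10',   # rigid transform cam0 -> cam1
}
RIGHT_PAIR = {
    'intrinsics': ('camRect2', 'camRect3'),
    'rectify': ('R_rect2', 'R_rect3'),
    'extrinsic': 'T_32',   # rigid transform cam2 -> cam3 (there is no T_23)
}
```

Note that the stored extrinsic for the right pair is T_32, so the left-pair code using T_10 keeps the same "higher index first" pattern; no T_23 entry exists in the yaml.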
@magehrig Hi, I replaced the numbers that stand for the corresponding cameras, but I got an error when I changed the rigid transformation from T_10 to T_23. I don't know how to handle this problem. Could you help me check why it happens?
@magehrig Hello, I have checked the cam_to_cam.yaml file and found that there is no T_23, only T_32. I think the index substitution should follow a symmetry rule, but due to my lack of knowledge of camera calibration I am not sure the indices are right. Could you check whether they are correct? I would appreciate your help, thank you very much!
That should work for you:
# Assumptions: `conf` is the parsed cam_to_cam.yaml, `Rot` is
# scipy.spatial.transform.Rotation, and `Transform` is the rigid-transform
# helper class used in the DSEC scripts.
import numpy as np
from scipy.spatial.transform import Rotation as Rot

# Rectified pinhole intrinsics of cam2 (right frame) and cam3 (right event).
K_r2 = np.eye(3)
K_r2[[0, 1, 0, 1], [0, 1, 2, 2]] = conf['intrinsics']['camRect2']['camera_matrix']
K_r3 = np.eye(3)
K_r3[[0, 1, 0, 1], [0, 1, 2, 2]] = conf['intrinsics']['camRect3']['camera_matrix']

# Rectifying rotations: cam2 -> camRect2 and cam3 -> camRect3.
R_r2_2 = Rot.from_matrix(np.array(conf['extrinsics']['R_rect2']))
R_r3_3 = Rot.from_matrix(np.array(conf['extrinsics']['R_rect3']))
T_r2_2 = Transform.from_rotation(R_r2_2)
T_r3_3 = Transform.from_rotation(R_r3_3)

# Rigid transformation from cam2 to cam3 (the yaml stores T_32, not T_23).
T_3_2 = Transform.from_transform_matrix(np.array(conf['extrinsics']['T_32']))

# Chain camRect3 -> cam3 -> cam2 -> camRect2, then keep only the rotation to
# build the homography mapping camRect3 pixels to camRect2 pixels.
T_r2_r3 = T_r2_2 @ T_3_2.inverse() @ T_r3_3.inverse()
R_r2_r3_matrix = T_r2_r3.R().as_matrix()
P_r2_r3 = K_r2 @ R_r2_r3_matrix @ np.linalg.inv(K_r3)
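For reference, the same computation can be sketched in plain numpy without the Transform helper. The calibration values below are hypothetical stand-ins for a real cam_to_cam.yaml; since only the rotation part of T_32 is used, the resulting homography is exact only for points at infinity, which is why nearby objects remain misaligned.

```python
import numpy as np

# Hypothetical calibration values standing in for cam_to_cam.yaml entries
# (fx, fy, cx, cy ordering assumed for the camera_matrix fields).
conf = {
    'intrinsics': {
        'camRect2': {'camera_matrix': [1000.0, 1000.0, 320.0, 240.0]},
        'camRect3': {'camera_matrix': [560.0, 560.0, 320.0, 240.0]},
    },
    'extrinsics': {
        'R_rect2': np.eye(3).tolist(),
        'R_rect3': np.eye(3).tolist(),
        # 4x4 rigid transform cam2 -> cam3: identity rotation, 0.6 m baseline.
        'T_32': [[1, 0, 0, 0.6], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
    },
}

def homography_rect3_to_rect2(conf):
    """Homography mapping rectified-cam3 pixels to rectified-cam2 pixels.

    Rotation only, i.e. valid for points at infinity.
    """
    K_r2 = np.eye(3)
    K_r2[[0, 1, 0, 1], [0, 1, 2, 2]] = conf['intrinsics']['camRect2']['camera_matrix']
    K_r3 = np.eye(3)
    K_r3[[0, 1, 0, 1], [0, 1, 2, 2]] = conf['intrinsics']['camRect3']['camera_matrix']

    R_r2_2 = np.array(conf['extrinsics']['R_rect2'])      # cam2 -> camRect2
    R_r3_3 = np.array(conf['extrinsics']['R_rect3'])      # cam3 -> camRect3
    R_3_2 = np.array(conf['extrinsics']['T_32'])[:3, :3]  # cam2 -> cam3

    # camRect3 -> cam3 -> cam2 -> camRect2, rotation part only.
    R_r2_r3 = R_r2_2 @ R_3_2.T @ R_r3_3.T
    return K_r2 @ R_r2_r3 @ np.linalg.inv(K_r3)

P_r2_r3 = homography_rect3_to_rect2(conf)

# Map the cam3 principal point into the cam2 image.
uv3 = np.array([320.0, 240.0, 1.0])
uv2_h = P_r2_r3 @ uv3
uv2 = uv2_h[:2] / uv2_h[2]
```

With a homography like this, something along the lines of cv2.warpPerspective can remap the rectified cam3 image into the cam2 frame; because the translation is dropped, alignment degrades for close objects.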
OK, thank you very much! @magehrig
Thanks for your code! I am also working on Event-Intensity Stereo. This code assumes the scene is far away, so close objects are still not perfectly aligned; doesn't this affect the accuracy of Event-Intensity Stereo?
Hello, I am working on Event-Intensity Stereo. Since the event camera and the frame camera have different resolutions, I want to know how to align the events and intensity images from the event camera and RGB camera on the same side. Is a simple resize or linear interpolation enough?