Hello,
I am currently working with pyrealsense to get the RGB and depth images from a SR300. Since they come from two different cameras, there is a misalignment between them (see the following image). I think it could be removed after calibration.
I've tried to calibrate the camera with two methods. The first is the ROS package camera_calibration, whose result is a camera.yaml that can be converted to other formats. The second is to do the calibration with OpenCV, which can be found here; its result is a NumPy archive called 'calibration_data.npz' (like many other examples such as this one).
The problem is that I cannot find out how to load the calibration data with pyrealsense. Could you briefly explain how I can load either of these files? Thanks a lot in advance!
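For the OpenCV route, the .npz archive is just a zipped set of NumPy arrays, so it can at least be read back with np.load. A minimal sketch (the key names "camera_matrix" and "dist_coeffs" and all values are assumptions on my part; inspect data.files to see what your calibration script actually saved):

```python
import numpy as np

# Write a stand-in archive so the sketch is self-contained; in practice
# 'calibration_data.npz' already exists from the calibration run, and the
# key names below are assumptions -- check np.load(...).files for yours.
np.savez("calibration_data.npz",
         camera_matrix=np.array([[615.0, 0.0, 320.0],
                                 [0.0, 615.0, 240.0],
                                 [0.0, 0.0, 1.0]]),
         dist_coeffs=np.array([0.1, -0.05, 0.0, 0.0, 0.0]))

with np.load("calibration_data.npz") as data:
    print(data.files)               # lists the stored array names
    K = data["camera_matrix"]       # 3x3 intrinsic matrix
    dist = data["dist_coeffs"]      # distortion coefficients
```

The loaded K and dist could then be passed to cv2.undistort on each color frame, independently of pyrealsense.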
[Complement]
I found that in the setup of offline.py there are functions for saving and loading the intrinsics of the color and depth cameras from YAML. But I don't know how to use them, since test_offline.py doesn't call those functions, and I wonder whether online usage follows the same principle.
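For the ROS route, the camera.yaml written by camera_calibration is plain YAML and can be parsed directly, without going through pyrealsense. A sketch assuming the standard ROS calibration file layout (the numeric values in the test file are placeholders):

```python
import numpy as np
import yaml  # PyYAML


def load_ros_calibration(path):
    """Read intrinsics from a camera.yaml written by ROS camera_calibration.

    The field names (camera_matrix, distortion_coefficients) follow the
    standard ROS calibration file layout, where each block stores its
    values as a flat 'data' list.
    """
    with open(path) as f:
        calib = yaml.safe_load(f)
    K = np.array(calib["camera_matrix"]["data"]).reshape(3, 3)
    dist = np.array(calib["distortion_coefficients"]["data"])
    return K, dist
```

The returned K and dist are in the same form as the OpenCV result, so the two calibration routes can feed the same downstream code.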
[Complement]
Now I know that this is not a problem with pyrealsense, because the intrinsics of the RealSense SR300 are fixed. But we can use OpenCV to align the frames ourselves. This is the code for it, and I think this issue can be closed.
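As a sketch of what that alignment amounts to: each depth pixel is back-projected to a 3-D point with the depth intrinsics, transformed into the color camera's frame, and re-projected with the color intrinsics. All intrinsics and extrinsics below are made-up placeholders; substitute the values from your own calibration:

```python
import numpy as np

# Placeholder intrinsics/extrinsics -- replace with your calibration results.
K_depth = np.array([[475.0, 0.0, 320.0], [0.0, 475.0, 240.0], [0.0, 0.0, 1.0]])
K_color = np.array([[615.0, 0.0, 320.0], [0.0, 615.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)                     # rotation depth -> color
t = np.array([0.025, 0.0, 0.0])   # translation depth -> color, in meters


def register_depth_to_color(depth_m):
    """Map each depth pixel (with depth in meters) to color-image coords."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    # Back-project depth pixels to 3-D points in the depth camera frame.
    x = (u - K_depth[0, 2]) * z / K_depth[0, 0]
    y = (v - K_depth[1, 2]) * z / K_depth[1, 1]
    # Transform into the color camera frame, then project with K_color.
    pts = np.stack([x, y, z], axis=-1) @ R.T + t
    with np.errstate(divide="ignore", invalid="ignore"):
        uc = K_color[0, 0] * pts[..., 0] / pts[..., 2] + K_color[0, 2]
        vc = K_color[1, 1] * pts[..., 1] / pts[..., 2] + K_color[1, 2]
    return uc, vc
```

Pixels with zero depth produce NaN/inf coordinates and should be masked out before sampling the color image.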