Closed darius-chan-ml closed 2 years ago
You can align them manually. See these examples on how to retrieve intrinsic and extrinsic camera information: https://github.com/Kinovarobotics/kortex/blob/master/api_python/examples/500-Gen3_vision_configuration/01-vision_intrinsics.py https://github.com/Kinovarobotics/kortex/blob/master/api_python/examples/500-Gen3_vision_configuration/02-vision_extrinsics.py
Information on how to use that data to align an RGB-depth image pair can be found in various sources, including the OpenCV calib3d documentation: https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html
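The alignment itself is just deproject-transform-reproject: lift each depth pixel into 3D with the depth camera's intrinsics, move it into the color camera's frame with the extrinsics, and project it back with the color intrinsics. A minimal NumPy sketch of that math, assuming you have already pulled the intrinsic matrices and the depth-to-color rotation/translation from the examples above (the matrix values here are hypothetical placeholders, not the Gen 3's actual calibration):

```python
import numpy as np

def align_depth_to_color(depth, K_depth, K_color, R, t, depth_scale=0.001):
    """Register a raw depth image onto the color image grid.

    depth: (H, W) uint16 raw depth image
    K_depth, K_color: 3x3 intrinsic matrices (fill in the values reported
        by 01-vision_intrinsics.py; the ones used below are made up)
    R, t: rotation (3x3) and translation (3,) from the depth frame to the
        color frame (from 02-vision_extrinsics.py)
    depth_scale: raw unit -> meters (assumed 1 mm per unit here)
    Returns an (H, W) float32 depth map, in meters, aligned to the color image.
    """
    h, w = depth.shape
    aligned = np.zeros((h, w), dtype=np.float32)
    # Pixel grid of the depth image
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32) * depth_scale
    valid = z > 0
    # Deproject: pixel + depth -> 3D point in the depth camera frame
    x = (us - K_depth[0, 2]) / K_depth[0, 0] * z
    y = (vs - K_depth[1, 2]) / K_depth[1, 1] * z
    pts = np.stack([x[valid], y[valid], z[valid]], axis=0)  # 3 x N
    # Transform into the color camera frame
    pts_c = R @ pts + t.reshape(3, 1)
    # Reproject into color pixel coordinates
    u_c = np.round(K_color[0, 0] * pts_c[0] / pts_c[2] + K_color[0, 2]).astype(int)
    v_c = np.round(K_color[1, 1] * pts_c[1] / pts_c[2] + K_color[1, 2]).astype(int)
    inb = (u_c >= 0) & (u_c < w) & (v_c >= 0) & (v_c < h)
    aligned[v_c[inb], u_c[inb]] = pts_c[2][inb]
    return aligned
```

This is the same operation pyrealsense's align block performs internally, minus occlusion handling (a nearer point can be overwritten by a farther one here; sort by depth or keep the minimum per pixel if that matters for your use case).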
Hello, I have made a procedure over on my personal repo: https://github.com/IlianaKinova/cv_calibration It's an automated calibration procedure. Improvements are on the way. If you find issues, open them directly over there. @darius-chan-ml
Hi
I am working with the Kinova Gen 3 arm and its vision system. I've managed to get access to both the color and depth streams using RTSP. I'd like to align the streams using pyrealsense; however, since the streams come over RTSP, they are no longer in a pipeline the RealSense library can consume. Could I get assistance as to how this was done internally, or a pointer to the right resources for using the RealSense library with the arm? I'm looking to get depth data, and I've only seen tutorials using the color streams, not the depth streams.
Thank you!