Open Engineer21-a opened 1 day ago
Hi @Engineer21-a, thanks very much for your questions.
Calibration Questions
I recommend using the intrinsics provided by pyrealsense2, as values from official RealSense tools are typically more reliable than those from tools that were not specifically designed for RealSense cameras.
Each individual camera has its own unique intrinsic values due to the manufacturing process at the factory.
https://github.com/IntelRealSense/librealsense/issues/4061 has a Python case where the RealSense user created an rs2.intrinsics object so they could define custom intrinsics.
If your project makes use of OpenCV code then you could consider removing the distortion model from the RealSense image using OpenCV's undistort() function. I do not believe that doing so will make a significant difference though.
Settings and Accuracy
D405 is the only RealSense camera model that is officially rated as having sub-millimeter accuracy and is designed to provide high quality, high accuracy RGB and depth images at close range to objects.
I would first suggest maximizing the RGB Sharpness option to a value of '100', which can greatly sharpen the RGB image and make fine detail such as barcodes much clearer. https://github.com/IntelRealSense/librealsense/issues/11912#issuecomment-1600753535 has Python code for doing so.
There was a past case at the link below that used the infrared image for Aruco detection instead of the RGB image.
Ideas or Suggestions for High Accuracy
Setting the RGB image to maximum Sharpness, as mentioned above, and using the largest available RGB resolution are all I can think of that would make a positive difference to RGB accuracy.
Hi @MartyG-RealSense,
Thank you for your detailed response and suggestions!
I followed your advice regarding calibration and settings, particularly maximizing the sharpness to 100 and using the highest resolution for the RGB stream. I did observe a slight improvement in the accuracy of ArUco marker detection, with measurement errors decreasing from around 0.9 mm to 0.5 mm. This aligns with your recommendation to enhance image clarity for fine details.
Given the improvements achieved with RGB optimization, I'm curious whether there are additional possibilities for further increasing accuracy, specifically by leveraging the depth information provided by the D435i. I understand that the D405 model is optimized for sub-millimeter accuracy, but since I'm working with the D435i, are there any specific methods or strategies that could utilize depth data to further refine the results?
Any insights or suggestions you could provide on how to best incorporate the depth information for high-accuracy detection would be greatly appreciated!
Thanks again for your support, and I look forward to your response.
https://github.com/IntelRealSense/librealsense/issues/8961 has a RealSense user's C++ example of detecting an Aruco tag's XYZ coordinates and getting its distance from the camera using depth information. I know you are using Python but it might give you some useful ideas.
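On the Python side, the underlying math is plain pinhole back-projection. A sketch of deprojecting a detected tag's centre pixel plus its depth reading to an XYZ point, using placeholder intrinsics (this mirrors what rs2_deproject_pixel_to_point computes when the distortion coefficients are zero, e.g. on an aligned depth frame):

```python
def deproject_pixel_to_point(u, v, depth_m, fx, fy, ppx, ppy):
    """Back-project a pixel and its depth (in metres) to a 3D camera-space point.

    This is the pinhole model used by rs2_deproject_pixel_to_point when the
    distortion coefficients are all zero.
    """
    x = (u - ppx) / fx * depth_m
    y = (v - ppy) / fy * depth_m
    return (x, y, depth_m)

# Placeholder intrinsics, with a tag centre detected at the principal point:
# such a pixel maps straight down the optical axis.
point = deproject_pixel_to_point(319.5, 239.5, 0.5, 615.0, 615.0, 319.5, 239.5)
print(point)
```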
There was once a project for the RealSense Unity wrapper that created a depth pointcloud of an Aruco image that was textured with RGB data.
https://github.com/ajingu/RealSense?tab=readme-ov-file#aruco
There is also a research paper that used depth, infrared and color from a RealSense camera.
https://pure.tue.nl/ws/portalfiles/portal/173218118/Meijer_M..pdf
In general though, RGB or infrared images are most commonly used for image tag detection. If depth could significantly enhance Aruco detection then it likely would have been used already to do so.
Issue Description
Hello RealSense team,
I am currently working on a project where I need to measure the relative distance and angle between two stationary markers (2x AprilTag or 2x STag) using only the RGB sensor for now. The goal is to achieve high accuracy with an error margin of less than 3-4 mm for distances between 100 mm and 1000 mm. The camera is mounted with its optical axis at a 90° angle, 1 meter above the plane containing the markers. However, the markers are positioned off-center, either to the left or right of the plane.
Here is a more detailed breakdown of my setup and the challenges I'm facing:
Project Setup and Goal:
Calibration Questions:
- The intrinsics from my own calibration differ from those provided by pyrealsense2. Could you provide guidance on how to handle this discrepancy?
- Should I use the intrinsics from pyrealsense2 or my own calibration results for better accuracy?
Settings and Accuracy:
Ideas or Suggestions for High Accuracy:
The accuracy of these distance measurements is critical for my project, so I would greatly appreciate any guidance you can offer to help me achieve this with a single camera and two stationary markers.
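For reference, once each tag's pose has been estimated from the RGB image (for example with OpenCV's solvePnP or the ArUco pose estimator), the relative distance and angle reduce to simple vector math. A minimal numpy sketch with made-up poses (the rotation matrices and translation vectors below are hypothetical, not measured values):

```python
import numpy as np

# Hypothetical tag poses as a pose estimator such as cv2.solvePnP would
# return them: a rotation matrix and a translation vector (metres) each.
R1 = np.eye(3)
t1 = np.array([-0.10, 0.00, 1.00])   # tag 1: 10 cm left of centre, 1 m away
R2 = np.eye(3)
t2 = np.array([0.20, 0.00, 1.00])    # tag 2: 20 cm right of centre, 1 m away

# Relative distance is the Euclidean norm of the translation difference.
distance_mm = np.linalg.norm(t2 - t1) * 1000.0

# Relative angle between the two tag orientations, from the rotation
# that maps one tag frame onto the other.
R_rel = R1.T @ R2
angle_rad = np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0))

print(distance_mm, np.degrees(angle_rad))
```

Because the error of this distance is driven by the per-tag pose error, anything that sharpens the corner detection (the Sharpness and resolution settings discussed above) feeds directly into the final accuracy.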
Thank you for your support!