Closed · karlita101 closed this issue 3 years ago
Hi @karlita101 The camera is calibrated in the factory when new, though you can perform recalibration of the camera's sensors as many times as you feel necessary.
Likewise, when to perform calibration can be a matter of personal evaluation. Sometimes an image may look so incorrect that performing a robust and thorough sensor calibration with the Dynamic Calibrator tool is an obvious course of action.
If the image is less obviously incorrect (e.g. it looks okay but you feel that the measurement accuracy is not as good as it should be) then you can perform a quick calibration with the On-Chip Calibration tool that is built into the RealSense Viewer program and receive a 'health check' value for the camera's calibration. You could also schedule regular calibration checks with On-Chip Calibration, such as once per week, to ensure that the camera's calibration remains in healthy condition.
If you are referring to the OEM version of the Dynamic Calibrator software that calibrates both intrinsics and extrinsics and is supplied with the $1500 USD OEM Calibration Target system when you mention OEM: most RealSense users will not require this, as the extrinsics-only calibration of the standard version of the Dynamic Calibrator software is sufficient in the majority of cases. The OEM Calibration Target system is aimed at engineering departments and manufacturing facilities.
As mentioned above, you can perform Dynamic Calibration as many times as you feel is required, though for regular checks the On-Chip Calibration tool will likely be sufficient.
I believe that for the marker tracking application that you have in mind, one of the standard calibration tools rather than a custom solution developed by yourself is likely to be sufficient.
Great.
Thank you Marty! Would you recommend the same for the IMU?
I am wondering if you might be able to point me in the right direction for marker tracking and positioning.
Thank you again
(I am working with Python and OpenCV but am open to any advice/direction.)
The IMU of the D435i is calibrated using a different tool, a Python script that is described in Intel's IMU calibration white-paper document. Like with the other calibration tools, you can repeat IMU calibration as many times as you feel necessary.
https://dev.intelrealsense.com/docs/imu-calibration-tool-for-intel-realsense-depth-camera
You can find OpenCV and Python information resources for ArUco by searching for 'aruco opencv python'.
There is an ArUco tutorial for OpenCV with Python code here:
https://www.learnopencv.com/augmented-reality-using-aruco-markers-in-opencv-c-python/
Thank you again Marty, I appreciate all the help!
I was not sure, as most of what I have found in previous user posts about getting XYZ tracking coordinates was implemented with ROS or SLAM. Is that typically a better way?
(I am hoping to mark a few different pieces of equipment and track their position over time.)
An alternative to tracking object pose with tags is to track the color of the object.
https://support.intelrealsense.com/hc/en-us/community/posts/360051876853/comments/360013257094
You can also train a machine-learning system to recognize individual objects, calculate their pose and work out how to pick up, handle and drop them if necessary.
@MartyG-RealSense
Hi Marty, I've done some ArUco detection and pose estimation since the last post, using only the RGB camera stream. Comparing the intrinsic parameters pulled directly from the camera against those derived with the standard OpenCV checkerboard calibration of the RGB sensor, there's quite a bit of discrepancy.
Similarly, the OpenCV calibration yields non-zero distortion coefficients, whereas the D435 reports all-zero parameters.
My question is the following:
The differences are quite small (<10%) for the intrinsic parameters, so I am still debating whether to move forward with the Intel parameters or the ones I derived. Will using a different set of RGB intrinsic parameters cause issues when trying to align the RGB and depth video streams?
Source | fx | fy | cx | cy
---|---|---|---|---
OpenCV | 566.94 | 566.62 | 310.38 | 255.85
Intel | 613.171 | 613.014 | 328.574 | 249.063
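For reference, both rows correspond to 3x3 camera matrices in OpenCV's `[[fx, 0, cx], [0, fy, cy], [0, 0, 1]]` layout, and the gap between them can be quantified directly; the short sketch below uses the table's values (the Intel ones are the kind returned by pyrealsense2's `get_intrinsics()` on the colour stream) and computes the relative focal-length discrepancy, roughly 7.5% here.

```python
import numpy as np

# Camera matrices from the table above, in OpenCV layout
K_opencv = np.array([[566.94, 0.0, 310.38],
                     [0.0, 566.62, 255.85],
                     [0.0, 0.0, 1.0]])
K_intel = np.array([[613.171, 0.0, 328.574],
                    [0.0, 613.014, 249.063],
                    [0.0, 0.0, 1.0]])

# Relative discrepancy in the horizontal focal length
rel_fx = abs(K_opencv[0, 0] - K_intel[0, 0]) / K_intel[0, 0]
```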
https://github.com/IntelRealSense/librealsense/issues/1430#issuecomment-375945916
Hopefully this will help to answer your question about whether it is worth using non-zero coefficients instead of zeroed ones.
In regard to undistorting with the coefficients: OpenCV has an undistort() function.
https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#undistort
https://github.com/IntelRealSense/librealsense/tree/master/examples/software-device
Hi @karlita101 Do you still require assistance with this case, please? Thanks!
Thanks very much @karlita101 for the update!
Issue Description
Hi there, I am working with the D435i and looking to see if I can use it to track specific markers in space and get their position (x, y, z coordinates) with respect to the stationary depth camera.
I am trying to determine whether or not a calibration would be necessary, so I am looking for advice on 1) how to evaluate whether calibration is needed and 2) how to choose between the custom (checkerboard) method vs. OEM vs. Dynamic Calibration.