IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Calibration Clarification D435i #7550

Closed karlita101 closed 3 years ago

karlita101 commented 4 years ago

Required Info
Camera Model { D400 }
Firmware Version (Open RealSense Viewer --> Click info)
Operating System & Version {Win (8.1/10) / Linux (Ubuntu 14/16/17) / MacOS
Kernel Version (Linux Only) (e.g. 4.14.13)
Platform PC/Raspberry Pi/ NVIDIA Jetson / etc..
SDK Version { legacy / 2.<?>.<?> }
Language {opencv/python}
Segment {Robot/Smartphone/VR/AR/others }

Issue Description

Hi there, I am working with the D435i and looking to see if I can use it to track specific markers and get their position in space (x, y, z coordinates with respect to the stationary depth camera).

I am trying to determine whether or not a calibration would be necessary, so I am looking for advice on 1) how to evaluate whether calibration is needed and 2) how to choose between the custom (checkerboard) method vs. OEM vs. Dynamic.

MartyG-RealSense commented 4 years ago

Hi @karlita101 The camera is calibrated in the factory when new, though you can perform recalibration of the camera's sensors as many times as you feel necessary.

Likewise, when to perform calibration can be a matter of personal evaluation. Sometimes an image may look so incorrect that performing a robust and thorough sensor calibration with the Dynamic Calibrator tool is an obvious course of action.

If the image is less obviously incorrect (e.g. it looks okay but you feel that the measurement accuracy is not as good as it should be) then you can perform a quick calibration with the On-Chip Calibration tool that is built into the RealSense Viewer program and receive a 'health check' value for the camera's calibration. You could also schedule regular calibration checks with On-Chip Calibration, such as once per week, to ensure that the camera's calibration remains in healthy condition.

If by OEM you mean the OEM version of the Dynamic Calibrator software, which calibrates both intrinsics and extrinsics and is supplied with the $1500 USD OEM Calibration Target system: most RealSense users will not require this, as the extrinsics-only calibration of the standard version of the Dynamic Calibrator software is sufficient in the majority of cases. The OEM Calibration Target system is aimed at engineering departments and manufacturing facilities.

As mentioned above, you can perform Dynamic Calibration as many times as you feel is required, though for regular checks the On-Chip Calibration tool will likely be sufficient.

I believe that for the marker tracking application that you have in mind, one of the standard calibration tools rather than a custom solution developed by yourself is likely to be sufficient.

karlita101 commented 4 years ago

Great.

Thank you Marty! Would you recommend the same for the IMU?

I am wondering if you might be able to point me in the right direction for marker tracking and positioning.

Thank you again

(I am working with Python and OpenCV but open to any advice/direction.)

MartyG-RealSense commented 4 years ago

The IMU of the D435i is calibrated using a different tool, a Python script that is described in Intel's IMU calibration white-paper document. Like with the other calibration tools, you can repeat IMU calibration as many times as you feel necessary.

https://dev.intelrealsense.com/docs/imu-calibration-tool-for-intel-realsense-depth-camera

You can find OpenCV and Python information resources for ArUco by searching for "aruco opencv python".

There is an ArUco tutorial for OpenCV with Python code here:

https://www.learnopencv.com/augmented-reality-using-aruco-markers-in-opencv-c-python/

karlita101 commented 4 years ago

Thank you again Marty, I appreciate all the help!

I was not sure as most of what I have found from previous user posts regarding getting the XYZ tracking coordinates has been mostly implemented with ROS or SLAM. Is this typically a better way?

( I am hoping to mark a few different pieces of equipment and get their position/locational data in time)

MartyG-RealSense commented 4 years ago

An alternative to tracking object pose with tags is to track the color of the object.

https://support.intelrealsense.com/hc/en-us/community/posts/360051876853/comments/360013257094

You can also train a machine-learning system to recognize individual objects, calculate their pose and work out how to pick up, handle and drop them if necessary.

https://www.intelrealsense.com/robot-grasping/

karlita101 commented 3 years ago

@MartyG-RealSense

Hi Marty, I've done some ArUco detection and pose estimation since the last post, using only the RGB camera stream. Comparing the intrinsic parameters pulled directly from the camera against those from a standard OpenCV checkerboard calibration of the RGB sensor, there's quite a bit of discrepancy.

Similarly, the same OpenCV calibration yields non-zero distortion coefficients, whereas the D435 reports all-zero parameters.

My questions are the following

  1. Generally: is this something common that comes up?
  2. Is there a way to tell the system to undistort the RGB camera stream using the non-zero parameters?
  3. Is there a way to tell the system to use another intrinsic camera matrix (the OpenCV one, for example)?

The differences are quite small (<10%) for the intrinsic parameters, so I am still debating whether to move forward with the Intel values or the ones I derived. Will using another set of intrinsic RGB parameters cause issues when trying to align the RGB and depth video streams?

|        | fx      | fy      | cx      | cy      |
| ------ | ------- | ------- | ------- | ------- |
| OpenCV | 566.94  | 566.62  | 310.38  | 255.85  |
| Intel  | 613.171 | 613.014 | 328.574 | 249.063 |

MartyG-RealSense commented 3 years ago
  1. The coefficients are all deliberately set to zero for the 400 Series cameras. Dorodnic the RealSense SDK Manager explains why in the link below.

https://github.com/IntelRealSense/librealsense/issues/1430#issuecomment-375945916

Hopefully this will help to answer your question about whether it is worth using non-zero coefficients instead of zeroed ones.

  2. The chart below details configurations in which RGB undistort should be enabled. I believe V = supported, x = unsupported and wip = work in progress.

[image: chart of RGB undistort support by camera configuration]

In regard to undistorting with coefficients: OpenCV has an undistort function.

https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#undistort

  3. I don't know of a way to override the coefficients being forced to zero. I would speculate (I am not certain on this point) that using software-device to define a virtual RealSense camera, instead of using the physical RealSense camera hardware, might provide a means to do so.

https://github.com/IntelRealSense/librealsense/tree/master/examples/software-device

MartyG-RealSense commented 3 years ago

Hi @karlita101 Do you still require assistance with this case, please? Thanks!

MartyG-RealSense commented 3 years ago

Thanks very much @karlita101 for the update!