IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

How to convert all cameras to the same world coordinates #10939

Closed lun73 closed 2 years ago

lun73 commented 2 years ago

Required Info
Camera Model D415
Operating System & Version Win10
Platform PC
Language C++

Issue Description

Hello everyone.

  1. I would like to ask if anyone knows how to write a calibration to Intel RealSense world coordinates in C++. After that, I would like to capture the point clouds from multiple cameras and merge them. I have two D415 cameras at the moment and expect to use three later. I have checked the related programs, but I still don't understand them well. At present I have captured the intrinsic and extrinsic parameters, but I don't know how to proceed from there, so I would like to ask anyone with relevant experience to help.

  2. I would also like to ask whether it is necessary to calibrate the intrinsics for this project.

MartyG-RealSense commented 2 years ago

Hi @lun73 You can trigger calibration of individual cameras from C++ code and then write the calibration to the camera hardware, using the approach described in Intel's self-calibration white paper at the link below.

https://dev.intelrealsense.com/docs/self-calibration-for-depth-cameras#appendix-d-on-chip-calibration-c-api
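
For reference, here is a minimal sketch of triggering on-chip calibration from C++ via the rs2::auto_calibrated_device interface that the appendix covers. The JSON settings string, stream profile and timeout below are illustrative placeholders rather than values to copy as-is:

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    // On-chip calibration expects the depth stream to be running
    // (the white paper recommends 256x144 Z16 at 90 FPS for this mode)
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 256, 144, RS2_FORMAT_Z16, 90);
    rs2::pipeline_profile profile = pipe.start(cfg);

    // The self-calibration API is exposed through auto_calibrated_device
    auto dev = profile.get_device().as<rs2::auto_calibrated_device>();

    // Illustrative settings only; see the white paper for the full option set
    std::string json = "{\"speed\": 3}";

    float health = 0.f;
    auto new_table = dev.run_on_chip_calibration(json, &health, 10000); // timeout in ms
    std::cout << "Calibration health: " << health << std::endl;

    // Apply the new table to the running device, then burn it to flash
    dev.set_calibration_table(new_table);
    dev.write_calibration();

    pipe.stop();
    return 0;
}
```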

As you are interested in the calibration of world coordinates though, it sounds as though you wish to perform a different kind of calibration - calibrating the positions of multiple cameras relative to each other. Is that correct, please?

It is recommended to calibrate the positions of the cameras relative to each other when combining data from multiple cameras. An example of this principle, in Python rather than C++, is Intel's box_dimensioner_multicam RealSense example program, which you referenced in another case at https://github.com/IntelRealSense/librealsense/issues/10872. It uses a checkerboard image placed on the floor, with all cameras pointed at it, to automatically calibrate the cameras together when the program is launched.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/box_dimensioner_multicam
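
To give a rough idea of the same principle in C++ (this is only a sketch, not code from that example; the board dimensions, square size and distortion handling are placeholder assumptions), each camera can solve for its pose relative to a shared checkerboard with OpenCV, and the board frame then acts as the common world frame:

```cpp
#include <librealsense2/rs.hpp>
#include <opencv2/opencv.hpp>

// Estimate one camera's pose relative to a checkerboard lying in the scene.
// On success, R and t map board (world) coordinates into this camera's frame.
bool estimate_camera_pose(const cv::Mat& color_image,
                          const rs2_intrinsics& intrin,
                          cv::Mat& R, cv::Mat& t)
{
    const cv::Size pattern(9, 6);      // inner corners of the board (placeholder)
    const float square_size = 0.025f;  // square edge length in metres (placeholder)

    std::vector<cv::Point2f> corners;
    if (!cv::findChessboardCorners(color_image, pattern, corners))
        return false;

    // 3D corner positions in the board's own (world) frame, on the z = 0 plane
    std::vector<cv::Point3f> object_points;
    for (int r = 0; r < pattern.height; ++r)
        for (int c = 0; c < pattern.width; ++c)
            object_points.emplace_back(c * square_size, r * square_size, 0.f);

    // Build the pinhole camera matrix from the RealSense intrinsics
    cv::Mat K = (cv::Mat_<double>(3, 3) << intrin.fx, 0, intrin.ppx,
                                           0, intrin.fy, intrin.ppy,
                                           0, 0, 1);
    // Assume negligible lens distortion here; use intrin.coeffs if your
    // stream reports a Brown-Conrady model
    cv::Mat dist = cv::Mat::zeros(1, 5, CV_64F);

    cv::Mat rvec, tvec;
    if (!cv::solvePnP(object_points, corners, K, dist, rvec, tvec))
        return false;

    cv::Rodrigues(rvec, R); // 3x3 rotation
    t = tvec;               // 3x1 translation
    return true;
}
```

Running this once per camera gives every camera a pose relative to the same board, so points can be mapped from any camera into the board frame, or from one camera into another by chaining the transforms.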


The RealSense SDK has a C++ wrapper for multicam pointcloud 'stitching' here:

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/pointcloud/pointcloud-stitching

Instructions for it are here:

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/pointcloud/pointcloud-stitching/doc/pointcloud-stitching-demo.md


If your project is able to use ROS and the RealSense ROS compatibility wrapper then Intel have a guide at the link below for stitching together pointclouds from up to 3 RealSense cameras (2 cameras on 1 computer, or 3 cameras with 2 computers) into a single combined pointcloud.

https://www.intelrealsense.com/how-to-multiple-camera-setup-with-ros/


If it is possible for the camera to be moved around in your project then you could look at the rs-kinfu C++ / OpenCV project. It can use a single camera that is moved around the scene to progressively build up a pointcloud image by fusing frames together. Once you are satisfied with the amount of detail in the pointcloud, you can export it to a .ply pointcloud data file.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv/kinfu


If it is not compulsory for you to develop your own application in C++ then the RealSense-compatible commercial software tool RecFusion Pro (which supports D415 and other models) can calibrate together multiple cameras and generate a combined scan.

https://www.recfusion.net/products/

It can align pointclouds with its in-built calibration procedure, as described in Section 6 - Multi-Sensor Calibration of the RecFusion user guide here:

https://www.recfusion.net/user-guide/

RecFusion has versions for Windows 10 and 64-bit Ubuntu (18.04 and 20.04).

lun73 commented 2 years ago

Yes, I want to calibrate all the cameras to the same coordinate system.

I have seen people use rs2_project_point_to_pixel, rs2_deproject_pixel_to_point and rs2_transform_point_to_point.

I have also seen OpenCV's findChessboardCorners() used.

I only write programs in C++, and the cameras do not move during calibration.

MartyG-RealSense commented 2 years ago

The link below is for a RealSense multiple camera pointcloud stitching system developed by the CONIX Research Center at Carnegie Mellon that is scalable up to 20 cameras. It is written in C++ code, though it is complex in its setup.

https://github.com/conix-center/pointcloud_stitching

lun73 commented 2 years ago

How do I compute the rotation and translation from one camera to another? Can you provide me with actual C++ examples and code?

MartyG-RealSense commented 2 years ago

I researched your question carefully. Unfortunately, there are few C++ references on this subject compared to the larger number available for Python.

https://github.com/IntelRealSense/librealsense/issues/8333 may be a helpful reference for technical information about calibrating multiple cameras together, though it is written in terms of Python rather than C++. https://github.com/IntelRealSense/librealsense/issues/2664 is also worth looking at.
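
Regarding the rotation and translation question though: once you have the rotation and translation from one camera to another (for example from a checkerboard pose estimate like the sketch earlier), the rs2_deproject_pixel_to_point and rs2_transform_point_to_point calls you mentioned can carry a depth pixel from one camera's frame into the other's. This is only a rough sketch, with the extrinsics assumed to come from your own calibration:

```cpp
#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h>

// Map a depth pixel observed by camera B into camera A's coordinate frame,
// given the extrinsics (rotation + translation) from B to A.
void pixel_from_b_to_a(const rs2::depth_frame& depth_b,
                       const rs2_intrinsics& intrin_b,
                       const rs2_extrinsics& b_to_a, // filled from your calibration
                       float pixel_b[2],
                       float point_a[3])
{
    // 1. Deproject the pixel into a 3D point in camera B's frame
    float point_b[3];
    float depth = depth_b.get_distance(static_cast<int>(pixel_b[0]),
                                       static_cast<int>(pixel_b[1]));
    rs2_deproject_pixel_to_point(point_b, &intrin_b, pixel_b, depth);

    // 2. Rotate and translate it into camera A's frame
    rs2_transform_point_to_point(point_a, &b_to_a, point_b);
}
```

rs2_extrinsics is simply a column-major 3x3 rotation (float rotation[9]) plus a translation in metres (float translation[3]), so a rotation and translation computed externally, for example with cv::solvePnP, can be copied straight into it.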

Accessing specialized external pointcloud library platforms such as Open3D or PCL from their compatibility wrappers in the RealSense SDK and using pointcloud stitching techniques like ICP registration may be a better way to achieve multi-camera pointcloud stitching from within a RealSense application.
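
As a rough sketch of what ICP registration with PCL can look like in C++, assuming the two clouds have already been converted from rs2::points into pcl::PointCloud objects and roughly pre-aligned (the parameter values below are placeholders):

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>

// Refine the alignment of cloud_b onto cloud_a with ICP and return the
// combined cloud. An initial guess from the extrinsic calibration should
// already have been applied to cloud_b.
pcl::PointCloud<pcl::PointXYZ>::Ptr
stitch_with_icp(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_a,
                pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_b)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(cloud_b);
    icp.setInputTarget(cloud_a);
    icp.setMaxCorrespondenceDistance(0.05); // metres, placeholder
    icp.setMaximumIterations(50);           // placeholder

    pcl::PointCloud<pcl::PointXYZ> aligned_b;
    icp.align(aligned_b);

    // Start from a copy of the target cloud and append the refined source cloud
    pcl::PointCloud<pcl::PointXYZ>::Ptr merged(new pcl::PointCloud<pcl::PointXYZ>(*cloud_a));
    if (icp.hasConverged())
        *merged += aligned_b;
    return merged;
}
```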

Another possibility that you could explore is calibrating cameras together using fiducial image tags such as Aruco or Apriltag, like the Intel demonstration in the YouTube video below that uses Apriltag boards to calibrate 9 RealSense camera positions relative to each other.

https://www.youtube.com/watch?v=UzIfn667abE
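
The pose estimation works the same way as with a checkerboard. Here is a rough C++ sketch using OpenCV's aruco contrib module (pre-4.7 API; the dictionary and marker size are placeholders, and ArUco is used here only as a stand-in for AprilTag):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>

// Estimate camera pose(s) relative to ArUco markers visible in an image.
// rvecs/tvecs give each detected marker's pose in the camera frame; a marker
// fixed in the scene can therefore serve as the shared world origin.
void estimate_marker_poses(const cv::Mat& image,
                           const cv::Mat& K, const cv::Mat& dist,
                           std::vector<cv::Vec3d>& rvecs,
                           std::vector<cv::Vec3d>& tvecs)
{
    const float marker_length = 0.10f; // marker edge length in metres (placeholder)

    cv::Ptr<cv::aruco::Dictionary> dictionary =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);

    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f>> corners;
    cv::aruco::detectMarkers(image, dictionary, corners, ids);

    if (!ids.empty())
        cv::aruco::estimatePoseSingleMarkers(corners, marker_length, K, dist,
                                             rvecs, tvecs);
}
```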

MartyG-RealSense commented 2 years ago

Hi @lun73 Do you require further assistance with this case, please? Thanks!

lun73 commented 2 years ago

Sorry, I'm still trying to figure it out. I don't yet have the ability to solve this problem.

MartyG-RealSense commented 2 years ago

A further recent discussion about multiple camera pointcloud stitching is at https://github.com/IntelRealSense/librealsense/issues/10795

lun73 commented 2 years ago

I've seen it before, thanks.

MartyG-RealSense commented 2 years ago

At this point I believe we have covered all of the possible C++ methods of calibrating cameras together and generating a combined pointcloud in this discussion, unfortunately.

lun73 commented 2 years ago

It's okay, you did your best and I did mine. It's my problem: the projects I want to do are too hard, and I don't know much about these things.

MartyG-RealSense commented 2 years ago

Thanks very much for your understanding.

The RealSense C++ pointcloud stitching wrapper mentioned earlier in this discussion may be the closest example to what you are aiming to achieve, though it involves using MATLAB for calibration of the cameras.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/pointcloud/pointcloud-stitching

lun73 commented 2 years ago

I didn't know multi-camera calibration could be so difficult in C++.

MartyG-RealSense commented 2 years ago

Pointcloud stitching is a task that is best suited to interfacing RealSense cameras with dedicated point cloud libraries such as Open3D and PCL. Attempting to do so only with the RealSense SDK's rs2_transform_point_to_point instruction is difficult, unfortunately.

PCL - which the RealSense SDK can interface with via a compatibility wrapper - has the Normal Distributions Transform method of point cloud stitching.

https://pointclouds.org/documentation/tutorials/normal_distributions_transform.html
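
A rough C++ sketch of the NDT approach along the lines of that tutorial (parameter values are placeholders, and the source cloud would normally be downsampled first, as the tutorial does):

```cpp
#include <Eigen/Core>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/ndt.h>

// Align a source cloud onto a target cloud with the Normal Distributions
// Transform. init_guess can come from the multi-camera extrinsic calibration.
Eigen::Matrix4f
align_with_ndt(pcl::PointCloud<pcl::PointXYZ>::Ptr source,
               pcl::PointCloud<pcl::PointXYZ>::Ptr target,
               const Eigen::Matrix4f& init_guess)
{
    pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
    ndt.setTransformationEpsilon(0.01); // convergence threshold (placeholder)
    ndt.setStepSize(0.1);               // line-search step size (placeholder)
    ndt.setResolution(1.0);             // voxel grid resolution in metres (placeholder)
    ndt.setMaximumIterations(35);       // placeholder

    ndt.setInputSource(source);
    ndt.setInputTarget(target);

    pcl::PointCloud<pcl::PointXYZ> aligned;
    ndt.align(aligned, init_guess);

    // The refined source-to-target transform
    return ndt.getFinalTransformation();
}
```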


In Intel's white-paper guide on using multiple RealSense cameras, at the link below, they demonstrated using a tool called LabVIEW to combine pointclouds into a single cloud.

https://dev.intelrealsense.com/docs/multiple-depth-cameras-configuration#c-aligning-point-clouds

[Image from the white paper: LabVIEW demo aligning and combining point clouds from multiple cameras]

The RealSense SDK has a C++-based compatibility wrapper for LabVIEW.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/labview

Whilst multiple cameras can be set up in LabVIEW, however, there are no instructions available for replicating the pointcloud-combining LabVIEW demo program illustrated in the white paper.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/labview#understanding-the-programming

MartyG-RealSense commented 2 years ago

Hi @lun73 Do you require further assistance with this case, please? Thanks!

lun73 commented 2 years ago

I will figure it out on my own, thank you.

MartyG-RealSense commented 2 years ago

Okay, thanks very much @lun73 for the update!

MartyG-RealSense commented 2 years ago

Case closed due to no further comments received.