IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Merge colored point clouds in Unity #6147

Closed. droidXrobot closed this issue 4 years ago.

droidXrobot commented 4 years ago
Required Info
Camera Model: D435
Firmware Version: 05.12.01.00
Operating System & Version: Win 10
Platform: PC
SDK Version: 2.0
Language: C#
Segment: Other

Issue Description

I have two D435 cameras and am streaming them both in Unity. I want to merge the two colored point clouds into one. Is there a way to do this?

MartyG-RealSense commented 4 years ago

Outside of Unity, a commonly used method of aligning multiple RealSense point clouds is an affine transform: you rotate and translate the point clouds in 3D space until they line up, then append them together into one large point cloud.

I have not personally seen anyone who has done this with RealSense point clouds inside the Unity wrapper though. The link below has the clearest explanation of Affine type transformations that I could find. The post is very old but I believe that the general principles of the method detailed in it still apply.

https://stackoverflow.com/questions/5271195/how-to-change-the-affine-transform-of-an-object-dynamically-using-4x4-transforma

Unity's documentation has a page on matrices that covers rare transformation situations where changing rotation / scale / position of an object is insufficient to achieve what you are aiming for.

https://docs.unity3d.com/ScriptReference/Matrix4x4.html
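
As a rough sketch of how that rotate-translate-append idea could look in Unity C# (this is only an illustration; the class and method names here are mine, not from the SDK):

using UnityEngine;

public static class PointCloudMerge
{
    // Appends cloud B onto cloud A after moving B by the rigid transform bToA.
    public static Mesh Merge(Vector3[] vertsA, Color32[] colsA,
                             Vector3[] vertsB, Color32[] colsB,
                             Matrix4x4 bToA)
    {
        var verts = new Vector3[vertsA.Length + vertsB.Length];
        var cols = new Color32[colsA.Length + colsB.Length];
        vertsA.CopyTo(verts, 0);
        colsA.CopyTo(cols, 0);
        for (int i = 0; i < vertsB.Length; i++)
            verts[vertsA.Length + i] = bToA.MultiplyPoint3x4(vertsB[i]); // rotate + move B
        colsB.CopyTo(cols, vertsA.Length);

        // One big point mesh; UInt32 indices allow more than 65k points.
        var mesh = new Mesh { indexFormat = UnityEngine.Rendering.IndexFormat.UInt32 };
        mesh.vertices = verts;
        mesh.colors32 = cols;
        var indices = new int[verts.Length];
        for (int i = 0; i < indices.Length; i++) indices[i] = i;
        mesh.SetIndices(indices, MeshTopology.Points, 0);
        return mesh;
    }
}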

wodndb commented 4 years ago

If your two D435 cameras are set at fixed locations, you can register the point clouds from the two cameras using a 3D rigid transform. Calculate the transform matrix from the locations of the two D435 cameras. (Note: the right IR camera of the RealSense D435, between the IR projector and the color camera, is the origin of the world coordinate system in Unity.)

Translation transform: Matrix4x4.Translate(Vector3)
Rotation transform: Matrix4x4.Rotate(Quaternion.Euler(x, y, z))

Iterative Closest Point (ICP) also helps with local registration of point clouds. However, there is no open C# library for ICP. If you can build a DLL from a C++ project, PCL or Open3D are also good choices for merging point clouds: implement the registration function in a C++ project based on PCL or Open3D, export it as a DLL, and import the DLL file into Unity.
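
As an illustration of the Unity side of that DLL route (a sketch only; the library name "registration" and the function icp_register are hypothetical, not a real library):

using System.Runtime.InteropServices;
using UnityEngine;

public static class NativeRegistration
{
    // Hypothetical native function: takes flat XYZ arrays for the source and
    // target clouds and writes a 4x4 row-major transform into outMatrix16.
    [DllImport("registration")]
    private static extern void icp_register(
        float[] source, int sourceCount,
        float[] target, int targetCount,
        float[] outMatrix16);

    public static Matrix4x4 Register(float[] sourceXyz, float[] targetXyz)
    {
        var m = new float[16];
        icp_register(sourceXyz, sourceXyz.Length / 3,
                     targetXyz, targetXyz.Length / 3, m);
        var result = Matrix4x4.identity;
        for (int i = 0; i < 16; i++)
            result[i / 4, i % 4] = m[i]; // row-major copy into Unity's matrix
        return result;
    }
}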

Color is very difficult. I tried to merge colored point clouds, but the colors differ between the two RealSense cameras because the angle of the light source differs. So I recommend adding a light source around each RealSense camera to make the colors of the two point clouds consistent.

MartyG-RealSense commented 4 years ago

@wodndb Thanks for your help!

MartyG-RealSense commented 4 years ago

This case will be closed after 7 days from the time of writing this message if there are no further comments. Thanks!

MartyG-RealSense commented 4 years ago

Case closed due to no further comments received.

fajarnugroho93 commented 4 years ago

If your two D435 cameras are set at fixed locations, you can register the point clouds from the two cameras using a 3D rigid transform. [...]

Hi, could you explain more about the "registration of point clouds" part? How do I do it in Unity? And for the matrix part, how do we get the translation and rotation values? Is it different from manually moving and rotating the point cloud objects in the world scene?

MartyG-RealSense commented 4 years ago

@wodndb I believe that the above question from @fajarnugroho93 is meant for you. Thanks!

wodndb commented 4 years ago

OK. My description was very abstract. It's hard to explain because this isn't really my field and I'm not good at English, but I'll try hard to explain how I do registration between two point clouds from the cameras. First, look at the image below, which shows the position and rotation of the two RealSense D415 cameras.

[Image: positions and rotations of the two RealSense D415 cameras]

When you view the two point clouds in Unity, the cameras and point clouds are overlapped at the origin.

[Image: the two point clouds rendered overlapping at the origin]

In this situation, we should translate and rotate the green point cloud for registration, as in this image.

[Image: the green point cloud translated and rotated into alignment with the other cloud]

Before registration, we should check the origin point of the RealSense D415. Actually, the origin point of the D415 is not the center of the D415 housing; the origin point is the right IR camera. You can see this in the RealSense Viewer's 3D mode.

The origin of the D415's world coordinates, relative to the center of the housing, is (0.02, 0, -0.010025). I calculated this from the D400 series datasheet. So the translation matrix is

Tc = | 1  0  0   0.02     |
     | 0  1  0   0        |
     | 0  0  1  -0.010025 |
     | 0  0  0   1        |

We can build this with Matrix4x4 like this:

var Tc = Matrix4x4.Translate(new Vector3(0.02f, 0f, -0.010025f));

And the rotation of the left camera is (0, -45, 0), so the rotation matrix is

R = | cos(-45°)   0  sin(-45°)  0 |
    | 0           1  0          0 |
    | -sin(-45°)  0  cos(-45°)  0 |
    | 0           0  0          1 |

We can build this with Matrix4x4 like this:

var R = Matrix4x4.Rotate(Quaternion.Euler(0f, -45f, 0f));

And the relative position of the left camera is (-0.5, 0, -0.2), so the translation matrix is

T = | 1  0  0  -0.5 |
    | 0  1  0   0   |
    | 0  0  1  -0.2 |
    | 0  0  0   1   |

We can build this with Matrix4x4 like this:

var T = Matrix4x4.Translate(new Vector3(-0.5f, 0f, -0.2f));

Finally, we can get the rigid transform matrix for registration:

var transMat = Tc * R * T;

So... how do we transform the point cloud?

You know that a point cloud consists of vertices, colors, and normal vectors. Matrix4x4 has the member function Matrix4x4.MultiplyPoint3x4, so you can transform every vertex in the point cloud using this function.
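
For example, a minimal sketch of that vertex loop (my illustration, assuming the cloud's vertices are already in a Vector3 array):

// Apply the rigid transform to every vertex of the point cloud.
Vector3[] Transformed(Vector3[] vertices, Matrix4x4 transMat)
{
    var result = new Vector3[vertices.Length];
    for (int i = 0; i < vertices.Length; i++)
        result[i] = transMat.MultiplyPoint3x4(vertices[i]); // rotation + translation, no projection
    return result;
}

(Normals, if you keep them, should be transformed with transMat.MultiplyVector instead, since they must not be translated.)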

If I have spare time at the weekend, I will post about how to import a .dll into Unity and how to marshal between C++ and C#.

fajarnugroho93 commented 4 years ago

Oh wow, thank you very much for your explanation @wodndb

Okay, I get it now. But how is it different from directly translating and rotating the RsDevice in the Unity scene?

I have found a guide about creating a C++ .dll for Unity here: https://joinerda.github.io/DLLs-And-Unity/ Is it good?

Which class/method should I pay attention to in PCL for this?

wodndb commented 4 years ago

@fajarnugroho93 If you directly translate and rotate the RsDevice in the Unity scene, the registration may look good. However, the positions of the vertices in the Mesh are not changed. So if you want to calculate volumes, circles, or feature points of the point clouds, it is not good to directly translate and rotate the RsDevice in the Unity scene.

In my case, I need to calculate volumes, circles, and feature points of the point cloud, and merge the point clouds into one point cloud. So I use Matrix4x4 for the rigid transformation.

Well, actually I don't know the coordinate system of the game engine in detail. Maybe there is a better rigid transformation approach than mine.

And about creating a C++ .dll: the document you linked is not a good fit for PCL or Open3D, because their solutions are created by CMake, not directly in Visual Studio. So I recommend searching for CMake usage.

I use Open3D, so I don't know PCL in detail. Maybe PCL has ICP example source code, and in that source you can find what you should pay attention to.

Anyway, I have limitations explaining this in English. Thanks for reading my poor English.

fajarnugroho93 commented 4 years ago

@wodndb I see. So the relevant ICP part in Open3D should be this one: http://www.open3d.org/docs/release/tutorial/Basic/icp_registration.html right? But when do we call the method after we build the .dll? Should we create a new processing block?

Then, would this one be more suitable for creating the .dll: https://dominoc925.blogspot.com/2016/08/use-cmake-to-help-build-and-use-windows.html ?

Thank you for your thorough explanation. And also your English is not poor at all, it's all good!

wodndb commented 4 years ago

@fajarnugroho93 Yeah, that is the right example. Although the ICP example is in Python, usage of the ICP functions is similar in C++. When you build Open3D as a Visual Studio solution, you can find the C++ examples in the solution directory.

And if your cameras don't move during streaming, you just run ICP once and apply the transform matrix from the ICP function to the point cloud that you want to register.

If the point clouds are high resolution, ICP is slow. So Open3D supports voxel downsampling, which downsamples a point cloud using a voxel grid, and ICP can be faster on the voxel-downsampled points.
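
For intuition, a voxel-grid downsample can be sketched in plain C# like this (my illustration only; Open3D provides this natively as voxel_down_sample):

using System.Collections.Generic;
using UnityEngine;

public static class VoxelDownsample
{
    // Keeps one averaged point per occupied voxel of edge length `voxel` meters.
    public static Vector3[] Run(Vector3[] points, float voxel)
    {
        var cells = new Dictionary<Vector3Int, (Vector3 sum, int n)>();
        foreach (var p in points)
        {
            var key = new Vector3Int(
                Mathf.FloorToInt(p.x / voxel),
                Mathf.FloorToInt(p.y / voxel),
                Mathf.FloorToInt(p.z / voxel));
            cells.TryGetValue(key, out var acc); // acc is (zero, 0) if absent
            cells[key] = (acc.sum + p, acc.n + 1);
        }
        var result = new Vector3[cells.Count];
        int i = 0;
        foreach (var cell in cells.Values)
            result[i++] = cell.sum / cell.n; // centroid of the voxel's points
        return result;
    }
}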

Actually, I've never made the solution for creating the DLL file myself; I got it from coworkers at my company. So I'm not sure whether the approach you linked works well.

fajarnugroho93 commented 4 years ago

@wodndb I see. Okay, I will check into the ICP example.

you just run ICP once and apply transform matrix from ICP function to point cloud that you want to registration

Where exactly should I run the ICP on the Unity side?

wodndb commented 4 years ago

@fajarnugroho93 You need two point clouds for registration: one is the source and the other is the target. The source point cloud is attached to the target point cloud by a rigid transformation, and ICP returns the 4x4 matrix for that rigid transform.

So ICP should run in code that can load both the source and target point clouds.

You can get the vertices of a point cloud by modifying RsPointCloudRenderer.cs in the RealSense SDK for Unity.
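
For example (a sketch; it assumes the point cloud mesh lives in a MeshFilter on the same GameObject, which is where the wrapper's renderer updates it):

using UnityEngine;

// Attach next to RsPointCloudRenderer to snapshot the current vertices.
public class PointCloudGrabber : MonoBehaviour
{
    public Vector3[] GrabVertices()
    {
        // Mesh.vertices returns a copy, so the snapshot is safe to keep.
        return GetComponent<MeshFilter>().mesh.vertices;
    }
}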

bing-jian commented 4 years ago

@wodndb I also need to merge point clouds generated from a D415 (one device but at different positions). Thanks for posting the drawings, which are very helpful. I just want to make sure that my understanding of the coordinate system is correct. If (0, 0, 0) is the center of the right D415 in your first drawing, then I thought the center of the left D415 should be (0.5, 0, 0.2) and the origin of the point cloud from the right D415 should be (-0.02, 0, 0.010025). Also, the transformation ordering should be R x T x Tc, meaning we first shift the XYZ values of those points so that the new origin is the center of the device, then we apply the rigid motion of the device. Am I right?

wodndb commented 4 years ago

@bing-jian I reconsidered the matrices I commented, and I realized that the order of the matrix product was wrong. I'm sorry for sharing wrong information...

First, I want to rotate the camera, but the axis of rotation can only be the center of the D415, because the D415 has a 1/4-inch standard mount hole in the center of its housing. The origin point (0, 0, 0) is the right IR camera of the D415. The position of the rotation axis is (-0.02, 0, 0.010025), and the axis is parallel to the Y axis of the R^3 rectangular coordinate system. So the rotation matrix should be Rc = (Tc)^-1 x R x Tc.

Second, I want to translate the camera. The translation matrix is T from my example above.

Finally, the order of the matrix product should be T x Rc.

If you set the center position of the D415 to the right IR camera in the real world, the transform matrix can be simpler: it's T x R.
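
In Unity's Matrix4x4, reusing the numbers from the earlier example, the corrected composition would look like this (a sketch):

// Tc moves the rotation axis (the mount hole) to the origin.
var Tc = Matrix4x4.Translate(new Vector3(0.02f, 0f, -0.010025f));
var R = Matrix4x4.Rotate(Quaternion.Euler(0f, -45f, 0f));
var T = Matrix4x4.Translate(new Vector3(-0.5f, 0f, -0.2f));

// Rotate about the mount hole rather than the origin: Rc = Tc^-1 * R * Tc.
var Rc = Tc.inverse * R * Tc;

// Corrected order: rotate in place first, then translate.
var transMat = T * Rc;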

If my description of the order of the matrix product is wrong, please comment on what is wrong and how to get the correct rigid transform matrix.

Thanks.

AkhilRaja commented 3 years ago

Did you finish the solution and implement this in Unity? @bing-jian @wodndb