IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Intel Realsense D400 - matlab .bag 3D pointcloud #8368

Closed NickVO28 closed 3 years ago

NickVO28 commented 3 years ago

Hello dear community,

I am a student, and for my master's thesis I am researching 3D scanning for a specific agricultural application. I bought an Intel RealSense D455. At the moment I am working in MATLAB with limited knowledge; later I would like to switch to ROS, but I have no experience with it so far.

I see that a .bag file contains 95 topics. Which topics exactly do I need to make a 3D pointcloud? What steps and transformations do I still have to perform?

In addition, are there scripts available from the community for stitching in MATLAB? I would like to reconstruct the full pointcloud over the entire recording time of the .bag file.

Thanks in advance!

Regards, Nick

MartyG-RealSense commented 3 years ago

Hi @NickVO28 As the RealSense MATLAB wrapper requires Windows 10 and MATLAB 2017b, this may be an awkward option to pursue if you do not have Windows 10 or access to MATLAB 2017b. There are options that do not involve the RealSense MATLAB wrapper, though, if you are unable to meet those requirements. The resources in the link below may be useful for developing a project plan.

https://github.com/IntelRealSense/librealsense/issues/8340

It sounds as though your plan for stitching may be to generate multiple point cloud .ply files taken from different angles with a single camera, instead of using multiple cameras. If that is the case, the discussion in the link below will hopefully be helpful.

https://community.intel.com/t5/Items-with-no-label/process-to-view-D415-live-or-captured-depth-stream-into-matlab/m-p/631971/highlight/true?profile.language=de


If your end goal is to use ROS and you will be using a single IMU-equipped camera (and the D455 does have an IMU), it may be worth considering recording a point cloud map in real time, progressively building its detail until you are satisfied with it, and then saving the resultant point cloud to file. Intel's ROS D435i SLAM tutorial can provide guidance on setting up such a system.

https://github.com/IntelRealSense/realsense-ros/wiki/SLAM-with-D435i

If stitching point clouds with ROS is your preferred approach and you are able to use more than one camera, Intel also have a tutorial for a system to stitch together point clouds from multiple camera views into a single cloud.

https://github.com/IntelRealSense/realsense-ros/wiki/Showcase-of-using-2-cameras

NickVO28 commented 3 years ago

@MartyG-RealSense, thank you for your answer. I am using MATLAB 2020b on Windows 10. Is the wrapper exclusively for MATLAB 2017b? I have used pointcloud_example.m and it works: it gives me one live pointcloud and one static pointcloud. readbag_example.m also works, but it only gives me a depth map of a single moment during the recording.

I have a script which allows me to read all the topics contained in the .bag file. For example, /device_0/sensor_0/Depth_0/image/data contains a 1x614400 struct that I need to reshape into a matrix with the right dimensions. Normally for a pointcloud there should be a pattern in the struct (x, y, z, rgb). If I look at the /device_0/sensor_0/Depth_0/image/data struct, I only recognize a pattern of 2 elements. I am not sure what this means, or maybe I am just reading the wrong topic.

This is how I normally create a pointcloud:

```matlab
indPointcloud = 90;   % data point
pointcloudRaw = D(indPointcloud).data;

pointcloudX   = reshape(pointcloudRaw(:,1,:), [2048,1088]);
pointcloudY   = reshape(pointcloudRaw(:,2,:), [2048,1088]);
pointcloudZ   = reshape(pointcloudRaw(:,3,:), [2048,1088]);
pointcloudRGB = reshape(pointcloudRaw(:,4,:), [2048,1088]);
ptCloud = pointCloud([pointcloudY(:), pointcloudZ(:), pointcloudX(:)]);

%% Display pointcloud
figure
cmap = colormap('gray');
pcshow([pointcloudY(:), pointcloudZ(:), pointcloudX(:)], pointcloudRGB(:));
xlabel('X'); ylabel('Y'); zlabel('Z');
```

MartyG-RealSense commented 3 years ago

The RealSense MATLAB wrapper usually requires MATLAB 2017b, otherwise errors may occur. An official feature request has been filed with Intel for MATLAB 2020b support, though I do not have information regarding if / when it will be implemented.

There are methods of using RealSense data that pre-date the wrapper's existence, though these typically involve recorded data such as bag files or image files rather than live camera data.

If you are able to use the RealSense MATLAB wrapper, one of its developers provides advice in the link below about modifying line 8 of the pointcloud example to use a bag file as its data source instead of live camera data.

https://github.com/IntelRealSense/librealsense/issues/3241#issuecomment-462496753
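A minimal sketch of that change, assuming the wrapper's config class exposes enable_device_from_file as the C++ API does ('recording.bag' is an illustrative path):

```matlab
% Feed the wrapper's pointcloud example from a recorded bag instead of a
% live camera.
cfg = realsense.config();
cfg.enable_device_from_file('recording.bag');
pipe = realsense.pipeline();
profile = pipe.start(cfg);   % replaces the live-camera start call
```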


MartyG-RealSense commented 3 years ago

Hi @NickVO28 Do you require further assistance with this case, please? Thanks!

NickVO28 commented 3 years ago

@MartyG-RealSense

I have succeeded in creating an image and a depth map from the .bag file recorded with my RealSense D455.

Script below:

```matlab
%% Select data file (.bag) and read it
% (fName is the chosen .bag file)

%% Load image data from file
[X,O,CONN,TOPICs] = ReadBAG(fName);
[D,I] = ReadBAG(X, TOPICs{49});   % color image

%% Extract image
indImage = 200;
imageArray = D(indImage).data;
width  = D(indImage).width;
height = D(indImage).height;
colors = 3;
imageArrayB = imageArray(1:3:end);
imageArrayG = imageArray(2:3:end);
imageArrayR = imageArray(3:3:end);

imageMatrixB = rot90(flip(reshape(imageArrayB, [width, height]), 1), 3);
imageMatrixG = rot90(flip(reshape(imageArrayG, [width, height]), 1), 3);
imageMatrixR = rot90(flip(reshape(imageArrayR, [width, height]), 1), 3);

imageMatrix = uint8(zeros(height, width, colors));
imageMatrix(:,:,1) = imageMatrixB;
imageMatrix(:,:,2) = imageMatrixG;
imageMatrix(:,:,3) = imageMatrixR;

%% Display image
figure
subplot(2,3,1)
image(imageMatrix);

%% Load depth from file
[D2,I2] = ReadBAG(X, TOPICs{2});   % depth data

%% Extract pointcloud
indDepth = indImage;
depthArray = D2(indDepth).data;
width  = D2(indDepth).width;
height = D2(indDepth).height;

depthArrayLo = depthArray(1:2:end);   % low byte of each 16-bit depth value
depthArrayHi = depthArray(2:2:end);   % high byte

depthMatrixLo = rot90(flip(reshape(depthArrayLo, [width, height]), 1), 3);
depthMatrixHi = rot90(flip(reshape(depthArrayHi, [width, height]), 1), 3);

% Column 51 is chosen by hand to crop out the incorrect depth band at the
% left side of the image; this must be done, but it could also be automated.
depthMatrixLo_U16 = uint16(depthMatrixLo(:,51:640));
depthMatrixHi_U16 = uint16(depthMatrixHi(:,51:640));

depthMatrix_U16 = depthMatrixLo_U16 + (2^8) * depthMatrixHi_U16;

%% Display depth
subplot(2,3,4)
caxis([0 255]);
image(depthMatrixHi_U16, 'CDataMapping', 'scaled');
colorbar

subplot(2,3,5)
image(depthMatrixLo_U16, 'CDataMapping', 'scaled');
colorbar

subplot(2,3,6)
image(depthMatrix_U16, 'CDataMapping', 'scaled');
colorbar
```


Now I still need to figure out how to create a pointcloud out of the depth data in MATLAB. I have found some literature on this: https://medium.com/yodayoda/from-depth-map-to-point-cloud-7473721d3f

Next I need to stitch the pointclouds together. I don't know if it has been done in MATLAB yet; you referred to an example in ROS, which I might try in the future, but if I can do everything in MATLAB that would be more convenient. In the RealSense MATLAB wrapper there is a script with which you can stream a live pointcloud. It would be useful if there were a script that streams and stitches the frames together at the same time, and at the end lets you save the resulting pointcloud as, for example, a .ply.

I notice that only one RGB sensor is included in the data. Is it possible to receive data from both (left and right) sensors?

NickVO28 commented 3 years ago

[image: depth-to-point-cloud projection formula from the linked article]

If I am right, I need the formula above. Are the fx and fy values equal to the focal length given in the datasheet (1.93 mm)? How do I obtain cx and cy? For the extrinsic parameters I may need the transformation matrix given in the topic /device_0/sensor_1/Color_0/tf/0.

Thanks in advance.

MartyG-RealSense commented 3 years ago

Thanks very much @NickVO28 for sharing your code for the benefit of other MATLAB users in the RealSense community :)

The MATLAB documentation link below provides information about registering point clouds and stitching them using ICP.

https://uk.mathworks.com/help/vision/ug/3-d-point-cloud-registration-and-stitching.html
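A minimal sketch of the workflow in that example, registering consecutive clouds with ICP and accumulating the transform (grid sizes are illustrative; pcregistericp returns an affine3d in older releases and a rigid3d in newer ones, both of which expose a .T matrix):

```matlab
% ptClouds is assumed to be a cell array of pointCloud objects, one per frame.
merged = ptClouds{1};
accumTform = affine3d;                            % identity transform to start
for i = 2:numel(ptClouds)
    % Downsample before registration: faster and more robust ICP
    moving = pcdownsample(ptClouds{i},   'gridAverage', 0.01);
    fixed  = pcdownsample(ptClouds{i-1}, 'gridAverage', 0.01);
    tform  = pcregistericp(moving, fixed, 'Metric', 'pointToPlane');
    % Compose with the accumulated transform to place frame i in frame 1
    accumTform = affine3d(tform.T * accumTform.T);
    aligned = pctransform(ptClouds{i}, accumTform);
    merged  = pcmerge(merged, aligned, 0.005);    % 5 mm merge grid
end
pcshow(merged);
```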

MartyG-RealSense commented 3 years ago

In regard to the question about getting color from infrared, you can only get color (in RGB8 format) from the left IR sensor on the D415 and D455 models. To do this on D455, firmware version 5.12.8.200 or newer should be installed. Even then, it is not the same kind of color as that produced by the RGB sensor.

https://github.com/IntelRealSense/librealsense/issues/7934#issuecomment-741634631
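A minimal sketch of requesting that stream through the MATLAB wrapper, assuming its enable_stream mirrors the C++ overload (stream index 1 selects the left imager; resolution and fps are illustrative):

```matlab
cfg = realsense.config();
% RGB8 color from the left infrared imager (D455 firmware 5.12.8.200+)
cfg.enable_stream(realsense.stream.infrared, 1, 848, 480, realsense.format.rgb8, 30);
pipe = realsense.pipeline();
profile = pipe.start(cfg);
```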

MartyG-RealSense commented 3 years ago

For obtaining fx, fy, cx and cy, I hope the link below will be helpful:

https://github.com/IntelRealSense/librealsense/issues/2930
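Once you have those values, a minimal back-projection sketch in MATLAB (this assumes the depthMatrix_U16 from your earlier script and the D400-series default depth scale of 0.001 m per unit):

```matlab
% fx, fy, cx, cy: intrinsics taken from the camera calibration data.
% Note: if you crop columns as in your script, subtract the crop offset from cx.
depthScale = 0.001;                        % D400-series default; check your device
Z = double(depthMatrix_U16) * depthScale;  % depth in meters
[h, w] = size(Z);
[u, v] = meshgrid(0:w-1, 0:h-1);           % pixel coordinates
X = (u - cx) .* Z / fx;
Y = (v - cy) .* Z / fy;
pts = [X(:), Y(:), Z(:)];
pts = pts(Z(:) > 0, :);                    % drop invalid zero-depth pixels
ptCloud = pointCloud(pts);
pcshow(ptCloud);
```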

MartyG-RealSense commented 3 years ago

Hi @NickVO28 Do you require further assistance with this case, please? Thanks!

NickVO28 commented 3 years ago

@MartyG-RealSense I have managed to produce a pointcloud from the depth data. I used the camera intrinsics that I found under "calibration data" in the RealSense Viewer:

[image: calibration intrinsics shown in the RealSense Viewer]

May I assume these numbers are correct? The pointcloud I created looks like this:

[image: resulting pointcloud]

You can see that the entire pointcloud is rotated around two axes with respect to the reference coordinate system, while I aimed the camera approximately perpendicular to the wall. Is there a way to correctly align the wall in the pointcloud with the XZ-plane from the camera data (without measuring afterwards)? I have tried to use the transformation data, which contains rotation and translation matrices, but both seem to be empty and there is only one value for both over the entire time interval.

MartyG-RealSense commented 3 years ago

The calibration values obtained in the RealSense Viewer from my D455 (shown in the image below) are in similar ranges to those in the image that you provided, so it looks as though your calibration is okay.

[image: D455 calibration values shown in the RealSense Viewer]

I wonder whether the aligned-pointcloud script for the MATLAB wrapper in the link below may provide a solution for a depth-to-color aligned pointcloud that renders correctly, if you have not seen it already.

https://community.intel.com/t5/Items-with-no-label/MATLAB-RGB-point-cloud/m-p/584724#M10956

MartyG-RealSense commented 3 years ago

Hi @NickVO28 Do you require further assistance with this case, please? Thanks!

MartyG-RealSense commented 3 years ago

Case closed due to no further comments received.

NickVO28 commented 3 years ago

Hi @MartyG-RealSense ,

In the meantime I have come a bit further with the stitching of point clouds. There is unfortunately no definitive solution yet that delivers consistently robust results, but we are working towards it. I still have the problem that my pointcloud and RGB data are not correctly aligned with each other. In the images below you can see this clearly from the RGB and depth images; the RGB image is slightly shifted to the left.

[images: color alignment comparison; uncolored pointcloud]

I have already seen several solutions on this forum, but no MATLAB solution for a .bag file yet. Full offline processing is a must. If I am correct, the librealsense align.m function is for online processing, so it is not really useful for me.

Apparently the rs-convert tool provides a suitable solution by converting the .bag file to .ply files; the RGB images seem correctly aligned with the pointcloud at first sight. But the conversion is time-consuming, and it is very difficult to automate through cmd.exe.
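For reference, rs-convert can at least be driven from MATLAB via system(); a rough sketch, with illustrative paths and the -i / -p flags from rs-convert's usage text:

```matlab
% -i names the input bag, -p the .ply output prefix (one file per frame).
bagFile   = 'recording.bag';
outPrefix = fullfile(pwd, 'frames', 'cloud_');
status = system(sprintf('rs-convert -i "%s" -p "%s"', bagFile, outPrefix));
if status ~= 0
    error('rs-convert failed');
end
```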

I am hoping for a solution that lets me fix the alignment from a .bag file, entirely within a MATLAB environment and without being connected to the camera.

Thanks in advance.

Greetings, Nick

MartyG-RealSense commented 3 years ago

I went back to the start of this case to seek fresh insights. I noted the statement " I would like to reconstruct the full pointcloud of the entire recording time of the .bag file".

This made me wonder if you would like to extract each individual frame of a camera stream as a separate bag. I knew that this had been done in the past with C++ code and tracked down the reference to it.

https://github.com/IntelRealSense/librealsense/issues/2588#issuecomment-453757755

You then have the problem of stitching all the individual bag files into a single large bag for loading into MATLAB though. One person took the approach of merging the bags in ROS.

https://answers.ros.org/question/206851/merged-splitted-bagfiles/
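Alternatively, if you have access to MATLAB's ROS Toolbox, a minimal sketch of reading a bag's topics directly, which might sidestep the splitting and merging altogether (topic name taken from your earlier comment):

```matlab
bag  = rosbag('recording.bag');   % illustrative path
sel  = select(bag, 'Topic', '/device_0/sensor_0/Depth_0/image/data');
msgs = readMessages(sel);         % one message per recorded depth frame
```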

NickVO28 commented 3 years ago

Hi @MartyG-RealSense I am currently recording a measurement via the RealSense Viewer, with one .bag file as output for the entire measurement. A bag file is a format that is often used for playing back a measurement and contains all the data required for this. When I open the .bag file in MATLAB, the data looks like this:

[image: bag file contents listed in MATLAB]

For this example I recorded both RGB and depth at 30 fps, so the data from each sensor is a 1x478 struct, with a height and width depending on the resolution. Each struct must then be reshaped to the correct dimensions and scaled to create a depth image (as shown in my previous post). The depth matrix plus the camera intrinsics maths gives the pointcloud. I do this for every single frame and put the result in a cell array {i}, with i the frame number.

This is how I stitch the color and depth for every pointcloud:

```matlab
imAll{i} = im2double(imageMatrixAll{i});
JAll{i}  = imresize(imAll{i}, size(xgridAll{i}));   % resize image if resolution is different
PtcloudAll{i} = pointCloud(pcloudAll{i}, 'color', JAll{i});
```

Both inputs pcloudAll{i} (which contains the matrices of every pointcloud frame) and imAll{i} (which contains every RGB color frame) have to be the same size for this. Unfortunately, the result is a pointcloud whose RGB data is slightly shifted to the left (as you can see in my last post), so the alignment is not good.

For stitching I use the IMU data and MATLAB's ICP algorithm, and this works for the pointcloud geometry, but due to the incorrect alignment of the RGB color data per pointcloud, the stitched colors are not good across the entire 3D map. So the map looks wrong because of the incorrect colors, even though the geometry itself is well reconstructed.

So far everything works fine except for the RGB data, which is not correctly aligned with the point cloud; I think this is because the two streams come from different camera sensors. The data is already misaligned coming from my RealSense D455 before I even open it in MATLAB, so some kind of correction is needed.

MartyG-RealSense commented 3 years ago

The D455 can stream RGB8 color from the left infrared channel if the camera has firmware 5.12.8.200 or newer installed. Because the depth and infrared are produced by the same sensor, the color from IR will be perfectly aligned with the depth map.

https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance#section-use-the-left-color-camera

Whilst RGB8 color from left IR does not look the same as RGB color, it might be an acceptable compromise for your project, as the color data has the same origin as the depth.

NickVO28 commented 3 years ago

It is indeed true that the RGB8 images from the left sensor are perfectly aligned with the depth data. Maybe it could be an acceptable solution. I do notice that the quality of the colors is very much dependent on the lighting conditions.

If I am correct, I am also not supposed to point the camera into direct sunlight, or else I must define a region of interest.

MartyG-RealSense commented 3 years ago

Yes, you should avoid pointing the camera directly at the sun if possible, otherwise the IR sensor may become saturated and lead to a freeze or degradation of the image. The auto-exposure can recover the image if the camera view is moved out of the line of sight of the sun or the lenses are covered over.

Guidance about camera performance in sunlight can be found in Intel's camera tuning white-paper document.

https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance#section-use-sunlight-but-avoid-glare

Negative effects from glare can be greatly reduced by applying a linear polarization filter over the top of the lenses on the outside of the camera. This is discussed in Section 4.4 When to use polarizers and waveplates of Intel's white-paper about physical optical filters.

https://dev.intelrealsense.com/docs/optical-filters-for-intel-realsense-depth-cameras-d400#section-4-the-use-of-optical-filters


NickVO28 commented 3 years ago

Hi @MartyG-RealSense ,

I have now created MATLAB code which allows me to stitch all the pointclouds together into a full 360° 3D model. I first transform every point cloud with the IMU data into the first frame, described in a reference (world) frame, and then apply ICP for the final drift corrections to align the pointclouds.

Sadly, the IMU data from my RealSense D455 is so far off that my algorithm fails. I have calibrated the IMU several times with the rs-calibrate tool in Python and written the calibration data to the device.

When I do a measurement on a flat table, these are my results:

[image: IMU readings with the camera stationary on a flat table]

Notice there is a part of the plot where the IMU stops taking new samples. At first this only happened at a sample rate of 250 Hz and not at 63 Hz, but after a hardware reset and recalibration I am having the problem at 63 Hz too. This leads to errors when integrating the accelerations.

Notice there is also some drift on the x- and y-axes, even though I am not touching the camera during the measurement. Again, this leads to an error when integrating.

The error is quite small (0.2 m), but when I am actively moving the camera this leads to huge errors. For example, I walk in a circle with a radius of 3 meters. After walking around the circle once, the integrated data tells me I have moved 15 m along the x-axis and 20 m along the y-axis. Even the z-axis (pointing down) shows translations, though these should be at most 1 meter. (All accelerations are given in the world frame, since I have rotated them using the orientation obtained by fusing accelerometer and gyro data into Euler angles and then rotation matrices.)

[image: integrated position estimate, no low-pass filter applied]

I have kind of hit a dead end and was hoping you could help with this.

Thanks in advance.

MartyG-RealSense commented 3 years ago

I have not heard before of IMU data being used to stitch multiple point clouds together into a single 360 degree cloud. A more common approach would be to use an affine transform - setting the position and rotation of all the clouds to the same 3D space and appending them together.

The link below describes performing an affine transform on a point cloud in MATLAB using the pctransform command.

https://uk.mathworks.com/help/vision/ref/pctransform.html
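A minimal sketch of that approach, assuming R (3x3 rotation) and t (1x3 translation) come from your per-frame pose estimate, expressed in MATLAB's row-vector convention:

```matlab
% R and t define this frame's pose such that pWorld = pCam * R + t
% (affine3d uses the post-multiply convention on row vectors).
T = [R, zeros(3,1); t, 1];           % 4x4 homogeneous transform
tform = affine3d(T);
worldCloud = pctransform(camCloud, tform);
```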

Alternatively, the ICP approach in MATLAB is described here:

https://uk.mathworks.com/help/vision/ref/pcregistericp.html https://www.mathworks.com/help/vision/ug/3-d-point-cloud-registration-and-stitching.html

NickVO28 commented 3 years ago

Hi @MartyG-RealSense

That is exactly the way I am doing it right now. The problem is that the IMU is not giving ground-truth information. The gyroscope readings are quite accurate, even when integrating to Euler angles, but how accurate are the accelerometer readings?

I notice that when I double-integrate the accelerometer data to obtain the change in position, there is a huge error. Where the ground truth is sometimes 2 meters of movement, the accelerometer gives me 30 meters after double integration. This principle is called dead reckoning.

MartyG-RealSense commented 3 years ago

A problem associated with IMU data is that it is inherently noisy. The SDK's rs-motion IMU example program for C++ compensates for this by allowing the balance between gyro and accel to be adjusted via an 'alpha' value, weighting the estimate towards the more stable gyro to offset the accelerometer's noise.

https://github.com/IntelRealSense/librealsense/tree/master/examples/motion


I do not know of a similar mechanism that is built into MATLAB. As you have been using the RealSense MATLAB wrapper on Windows 10 though, I would speculate that you could likely access such functionality by creating it in a MATLAB wrapper script (since the wrapper's scripting is almost 1:1 with the SDK's C++ instructions).
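As a rough illustration of what such a script could look like, here is a minimal sketch of an rs-motion style complementary filter in MATLAB (variable names are illustrative: gyro in rad/s, accel in m/s², theta holding [pitch roll yaw], dt the sample period):

```matlab
% Run once per IMU sample pair.
alpha = 0.98;                  % weight towards the stable gyro
theta = theta + gyro * dt;     % integrate angular rate over the sample period
% Gravity direction gives absolute pitch/roll from the accelerometer
pitchAcc = atan2(accel(2), accel(3));
rollAcc  = atan2(-accel(1), hypot(accel(2), accel(3)));
% Blend: gyro keeps the estimate smooth, accel corrects long-term drift
theta(1) = alpha * theta(1) + (1 - alpha) * pitchAcc;
theta(2) = alpha * theta(2) + (1 - alpha) * rollAcc;
% theta(3) (yaw) stays gyro-only: the accelerometer cannot observe it
```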