IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

how to get pointcloud from depth image? #2488

Closed anranx closed 5 years ago

anranx commented 5 years ago

Required Info
- Question: how to get pointcloud from depth image?
- Camera Model: D400
- Firmware Version: (Open RealSense Viewer --> Click info)
- Operating System & Version: Win 10
- Platform: PC
- SDK Version: 2.16.0
- Language: C++
- Segment: Robot

Issue Description

Hi! I'm a beginner studying the RealSense D435. Here are two questions I want to ask.

Q1. I downloaded SDK 2.16.0 and the Intel RealSense Viewer, and I found examples like rs-pointcloud and rs-save-to-disk that can show the realtime pointcloud. But my question is: how can I convert a depth image saved from the Intel RealSense Viewer into a pointcloud? I tried the code below and got a very strange radial pointcloud, so I think I am doing something wrong. I want to read a depth image that I saved earlier and turn it into a pointcloud, e.g. a .pcd file.

```cpp
#include "stdafx.h"
// Note: several #include lines in the original post lost their header
// names to Markdown formatting; only the recoverable ones are kept here.
#include <boost/thread/thread.hpp>
#include <pcl/range_image/range_image.h>
#include <pcl/io/pcd_io.h>
#include <pcl/visualization/range_image_visualizer.h>
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/features/range_image_border_extractor.h>
#include <pcl/keypoints/narf_keypoint.h>
#include <pcl/console/parse.h>
#include <pcl/PolygonMesh.h>
#include <pcl/common/io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/vtk_lib_io.h>
#include <math.h>
#include <pcl/registration/correspondence_rejection_sample_consensus.h>
#include <pcl/registration/correspondence_rejection_trimmed.h>
#include <pcl/registration/icp.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

#pragma comment(lib, "opencv_highgui2412d.lib")
#pragma comment(lib, "opencv_core2412d.lib")

typedef pcl::PointXYZRGB P_pcl;
typedef pcl::PointCloud<P_pcl> point_cloud;
typedef point_cloud::Ptr ptr_cloud;

// Intrinsics for the 424x240 depth resolution
const double camera_factor = 1;
const double camera_cx = 212.559662;
const double camera_cy = 123.307076;
const double camera_fx = 213.512985;
const double camera_fy = 213.512985;

// -----Main-----
int main()
{
    // Flag 0 would load the PNG as 8-bit grayscale and destroy the 16-bit
    // depth values; cv::IMREAD_UNCHANGED (-1) keeps them intact.
    cv::Mat img = cv::imread("E:/computer_Depth.png", cv::IMREAD_UNCHANGED);
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloudPtr(new pcl::PointCloud<pcl::PointXYZ>);
    for (int m = 0; m < img.rows; m++)
    {
        for (int n = 0; n < img.cols; n++)
        {
            ushort d = img.ptr<ushort>(m)[n];
            if (d == 0)
                continue;
            pcl::PointXYZ p;
            p.z = double(d) / camera_factor;
            p.x = (n - camera_cx) * p.z / camera_fx;
            p.y = (m - camera_cy) * p.z / camera_fy;
            cloudPtr->points.push_back(p);
        }
    }

    pcl::visualization::CloudViewer viewer("Cloud Viewer");
    viewer.showCloud(cloudPtr);

    pcl::PointCloud<pcl::PointXYZ> cloud;
    cloud = *cloudPtr;

    pcl::io::savePCDFileBinary("computer_Depth.pcd", cloud);
    std::cout << " saved " << std::endl;
    getchar();
    return (0);
}
```

Q2. The second question: when I use the Intel RealSense Viewer 3D view, I can save the pointcloud as a .ply file correctly, but when I use the code below to read this .ply file, I get the error: NO POLYGONS.

```cpp
#include "pch.h"
// Note: a few #include lines in the original post lost their header
// names to Markdown formatting; only the recoverable ones are kept here.
#include <pcl/common/io.h>
#include <pcl/io/io.h>
#include <pcl/point_cloud.h>
#include <pcl/io/ply_io.h>
#include <pcl/io/ply/ply.h>
#include <pcl/console/parse.h>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/PolygonMesh.h>
#include <pcl/io/vtk_lib_io.h>
#include <pcl/io/vtk_io.h>

using namespace pcl;
using namespace pcl::io;
using namespace pcl::console;
using namespace std;

int main(int argc, char** argv)
{
    PolygonMesh cloud;
    pcl::visualization::PCLVisualizer viewer("PCL PolygonMesh");
    string filename = "flower.ply";
    pcl::io::loadPolygonFilePLY(filename, cloud);
    viewer.addPolygonMesh(cloud);
    viewer.setBackgroundColor(0.2, 0.0, 0.0);
    while (!viewer.wasStopped())
    {
        viewer.spinOnce();
    }
    return 0;
}
```

My questions may be very basic, but they have troubled me a lot. Can someone help me? THANKS A LOT!

AntoineWefit commented 5 years ago

But my question is how can I use the depth image from intel realsense viewer to convert the pointcloud?

Q1 : In the Viewer top right corner, the "save" icon will export the scenery to a ".ply" file. (screenshot)

or in C++, you'll need something like this:

```cpp
// Declarations
rs2::pointcloud pc;
rs2::points points;
rs2::pipeline pipe;

// Start your pipeline somewhere around here, e.g.:
pipe.start();

auto frames = pipe.wait_for_frames();
auto depth = frames.get_depth_frame();
auto color = frames.get_color_frame();

pc.map_to(color);
points = pc.calculate(depth);
points.export_to_ply("your_filename.ply", color);
```

I can save the pointcloud .ply file correctly, but when i use the code below to read this .ply file, there is mistake: NO POLYGONS.

Q2 : I think that right now, the ".ply" file generated with the Viewer contains a point cloud exclusively (no polygon). You can generate the mesh with another program once you have a point cloud.

anranx commented 5 years ago

@AntoineWefit THANKS FOR YOUR ANSWER!! But I still have a question about the code you used:

```cpp
auto frames = pipe.wait_for_frames();
auto depth = frames.get_depth_frame();
```

This represents the acquisition of realtime depth data, right? I want to input my own depth image, e.g. a .png file. (screenshot: 3_depth) I changed the Depth Visualization / Color Scheme from Jet to White to Black and used this kind of depth image: (screenshot: 2_depth) The code below works for the Kinect: I used a depth image from the Kinect together with the Kinect's intrinsics. When I used a depth image from the RealSense Viewer and changed the intrinsics, it generated a very strange radial pointcloud.

```cpp
cv::Mat img = cv::imread("E:/computer_Depth.png", -1);
ushort d = img.ptr<ushort>(100)[100];
pcl::PointCloud<pcl::PointXYZ>::Ptr cloudPtr(new pcl::PointCloud<pcl::PointXYZ>);
for (int m = 0; m < img.rows; m++)
{
    for (int n = 0; n < img.cols; n++)
    {
        ushort d = img.ptr<ushort>(m)[n];
        if (d == 0)
            continue;
        pcl::PointXYZ p;
        p.z = double(d) / camera_factor;
        p.x = (n - camera_cx) * p.z / camera_fx;
        p.y = (m - camera_cy) * p.z / camera_fy;
        cloudPtr->points.push_back(p);
    }
}
pcl::visualization::CloudViewer viewer("Cloud Viewer");
viewer.showCloud(cloudPtr);
pcl::PointCloud<pcl::PointXYZ> cloud;
cloud = *cloudPtr;

pcl::io::savePCDFileBinary("computer_Depth.pcd", cloud);
std::cout << " saved " << std::endl;
getchar();
return (0);
```

(screenshot) Is there any way to combine the RealSense D435 camera intrinsics and the depth images from the RealSense Viewer to generate the pointcloud?

About Q2, I saved the .ply file using the button you introduced, and I will find some code to generate the mesh. Thanks again! :)

AntoineWefit commented 5 years ago

This represents the acquisition of realtime depth data, right?

Yes, from a connected RealSense camera.

Is there any way to combine the realsense d435 camera intrinsic and depth images from realsense viewer to generate the pointcloud?

Not sure about that, but maybe this can help you. You could generate the pointcloud directly from the Viewer or with some simple code, instead of converting pictures.

ev-mp commented 5 years ago

Hello @anranx ,

  1. The color representation of depth in realsense viewer, as captured with PNG, uses histogram equalization - a one-way transformation that makes the image "human"-readable but not applicable for depth extraction. See #2231. The "Save snapshot" button in viewer also generates RAW file (along with PNG) that preserves the actual depth data in native Z16 (uint16_t) format.

  2. To generate pointcloud from depth frame you can use rs2_deproject_pixel_to_point API that requires the following:

    • Depth RAW data (see above)
    • Depth raw -> metric units scale factor (via get_option(RS2_OPTION_DEPTH_UNITS))
    • Depth intrinsics

You can also use the pointcloud utility class, as it generates vertices for the whole frame at once, at the expense of some extra coding on your side.

As for the Kinect-specific code - I don't have first-hand experience, but from reading a related discussion (https://communities.intel.com/message/532200#532200) I'm not sure whether the Kinect's "d" stands for Cartesian or radial depth. I can only confirm that RealSense devices produce undistorted (Cartesian) depth. So you'll need to check the math to see that the transformations align.

anranx commented 5 years ago

@AntoineWefit Thanks for your help! I changed my approach to getting the pointcloud: save the pointcloud from the Viewer and open it in MeshLab. Something is wrong in the visualization. I set the depth unit to 0.001 (each depth unit is 1 mm), but the model appears very small in MeshLab. (screenshot) I try to zoom with the mouse, but it stays small. When I change the depth unit to 0.0001 (each depth unit is 0.1 mm) and load the pointcloud into MeshLab, it becomes bigger. (screenshot)

Shouldn't the pointcloud with the 0.001 depth unit be bigger than the one with the 0.0001 depth unit? My question is how to set the depth unit so the pointcloud comes out at a normal size in MeshLab. Thank you!

AntoineWefit commented 5 years ago

I think that MeshLab is just adapting the window to the size of your 3D model. It looks like you captured a big scene, like an entire room. Some points are really far away, which makes your model kind of big. Can you specify what you are trying to capture? (for example: an object, a person, a room...)

I would advise you to set up the Intel RealSense Viewer to capture only the desired object. For that, adjust your settings like this (for example, if you want to capture something between 30 cm and 1 m): Controls: (screenshot) Advanced Controls / Depth Table: (screenshot)

anranx commented 5 years ago

Thank you for your reply! I am trying to capture a rib bone in a vivo model. I know the model space is limited and the camera's minimum imaging distance is 10 cm, so I reduced the resolution to the minimum, 424*240, to capture the ribs. I am capturing the pointcloud inside this vivo model, so I don't know why it produces so many points at long distance. And actually, I don't know how to set the other parameters at such a short distance. (screenshot) Because I will align STL1 (from the pointcloud I captured) with STL2 (from a CT reconstruction), I want to keep the same unit; mm is the universal unit. (screenshot) I reconstructed the pointcloud with the 0.001 unit and saved it as STL1, but it is too small compared to STL2 reconstructed from CT. (screenshot)

So can you suggest how to set those parameters to capture a pointcloud at low resolution (424*240) and short distance (limited space in the vivo model)? Or is there something wrong in my process of capturing the pointcloud?

Thank you very much!

AntoineWefit commented 5 years ago

First of all, your min distance (10 cm) is really close, even for the D435. It is going to be tricky to capture a nice pointcloud at this distance.

Secondly, since you are really close, I think that you should set your Depth Clamp Min to 0 and your Depth Clamp Max to around 30 cm. With that, you'll get rid of things beyond that distance. Try to also adjust the Disparity Shift to get a nice 3D model according to the scanning distance.

Regarding the units, I don't know if there is a way to make any change. I believe that the 3D models exported from the Viewer are set to meters, unfortunately, which explains the size difference between your STLs, but I could be wrong.

RealSense-Customer-Engineering commented 5 years ago

[Realsense Customer Engineering Team Comment] @anranx Any other questions about this ticket? Looking forward to your update. Thanks!

HippoEug commented 5 years ago

Honestly, until RealSense officially provides an example that can do a full pointcloud reconstruction without doing it manually in e.g. MeshLab, I think the application @anranx is trying to build will be very tricky, with lots of hard manual labor.

Pointcloud Reconstruction: (screenshot)

RealSense-Customer-Engineering commented 5 years ago

[Realsense Customer Engineering Team Comment] @anranx Any update from your side? Can we close this ticket?

anranx commented 5 years ago

Sorry for the late reply. I changed the depth unit in the RealSense Viewer and it is better now. Finally, I decided to get the pointcloud from the RealSense Viewer instead of from depth images. I am trying to do the reconstruction in MeshLab or with C++ code. Anyway, thank you guys very much for the advice and help! :))

HippoEug commented 5 years ago

Hi @anranx, just to show you my attempt at a reconstruction of a Maisto 1:18 car, taken with the RealSense D435, with the pointcloud captured from the RealSense Viewer application.

(screenshots: murcielago front left / front right / back left / back right)

It's not perfect by any means, but I hope this gives you a baseline of what you can expect.

anranx commented 5 years ago

@HippoEug Thank you for your reply! It doesn't seem to work very well, judging from the pictures. Do you have any ideas for the reconstruction? Or is there a better way to create the pointcloud than the RS Viewer? Maybe a better pointcloud would give a better reconstruction. My application is using the pointcloud to identify which rib is which, and then do registration against the model from the CT reconstruction, so it may not be necessary for me to scan the whole subject. Anyway, I'd be glad to learn from you if you have a better idea.

HippoEug commented 5 years ago

There is an application called RecFusion. I have tried it and it is excellent, but the downside is that it's $99 if I recall correctly. You should try the free version and see if it works. Make sure you download the one for single cameras!

HippoEug commented 5 years ago

Here is an output from the application.

This is an actual picture of my messy room: (photo)

These are the results from Recfusion. Instead of taking a screenshot on my PC, I took a picture of my monitor using my phone, hence the terrible reflection.

(screenshots)

I still think this should be a default feature for RealSense. The D435 I am using is owned by my company, but if this were a default feature, I might get a camera for myself as well.

lucasjinreal commented 5 years ago

@hilaEG @HippoEug Where did you guys get the point cloud result? I am now getting it from the RealSense ROS topic /camera/depth/color/points, but it sometimes contains a lot of very-long-distance points. Is there any way to filter them?

HippoEug commented 5 years ago

What do you mean by getting the point cloud result?

I think for the long-distance points, you can try using Depth Clamp.

anranx commented 5 years ago

@HippoEug Hello! May I ask you a question, please? Can you give me some suggestions on how to save the aligned point cloud in camera coordinates? https://forums.intel.com/s/question/0D50P0000490XLwSAM/getting-complete-point-cloud-from-d435 I find this code useful. At the end of this code, I tried to save the point cloud using pcl::io::savePCDFile in PCL and export_to_ply in the RealSense SDK, but both failed. The main steps are: get frames from the camera, align the depth and color frames, deproject pixels to points in camera coordinates, and save to disk, right?

lucasjinreal commented 5 years ago

@anranx I have exactly the same question here.................

goldwater668 commented 5 years ago

Hi @HippoEug, I now have a D405 depth camera in hand, and I want to achieve the effect you showed above. Can you give some steps on how to operate it? Thank you!