isl-org / Open3D

Open3D: A Modern Library for 3D Data Processing
http://www.open3d.org

[Question] denoising point cloud and mesh reconstruction #410

Closed ttsesm closed 6 years ago

ttsesm commented 6 years ago

Hi guys,

After solving the issue with the linking in the mex script, I am able to load and visualize my point cloud using the following script:

#include "mex.h"

// cpp system headers
#define _USE_MATH_DEFINES
#include <cmath>
#include <iostream>
#include <vector>
#include <fstream>
#include <memory>

// open3D headers
#include <Eigen/Dense>
#include <Core/Core.h>
#include <IO/IO.h>
#include <Visualization/Visualization.h>

using namespace Eigen;
using namespace std;
using namespace three;

template<typename... Args>       
void vprintf(const char *format, Args... args) {
    mexPrintf(format, args...);
}
template<typename... Args>
void PrintInfo(const char *format, Args... args) {
    vprintf(format, args...);
}
template<typename... Args>
void PrintDebug(const char *format, Args... args) {
    vprintf(format, args...);
}

void PrintPointCloud(const three::PointCloud &pointcloud)
{
//  using namespace three;
    bool pointcloud_has_normal = pointcloud.HasNormals();
    PrintInfo("Pointcloud has %d points.\n",
            (int)pointcloud.points_.size());

    Eigen::Vector3d min_bound = pointcloud.GetMinBound();
    Eigen::Vector3d max_bound = pointcloud.GetMaxBound();
    PrintInfo("Bounding box is: (%.4f, %.4f, %.4f) - (%.4f, %.4f, %.4f)\n",
            min_bound(0), min_bound(1), min_bound(2),
            max_bound(0), max_bound(1), max_bound(2));

//  for (size_t i = 0; i < pointcloud.points_.size(); i++) {
//      if (pointcloud_has_normal) {
//          const Eigen::Vector3d &point = pointcloud.points_[i];
//          const Eigen::Vector3d &normal = pointcloud.normals_[i];
//          PrintDebug("%.6f %.6f %.6f %.6f %.6f %.6f\n",
//                  point(0), point(1), point(2),
//                  normal(0), normal(1), normal(2));
//      } else {
//          const Eigen::Vector3d &point = pointcloud.points_[i];
//          PrintDebug("%.6f %.6f %.6f\n", point(0), point(1), point(2));
//      }
//  }
    mexPrintf("End of the list.\n\n");
}

/* The gateway function */
void mexFunction(
     int          nlhs,
     mxArray      *plhs[],
     int          nrhs,
     const mxArray *prhs[]
     )
{

      /* Check for proper number of arguments */

      if (nrhs != 3) 
      {
        mexErrMsgIdAndTxt("MATLAB:mexcpp:nargin",
            "pointCloud2Mesh requires 3 input arguments, the pointCloud points, the corresponding color and the albedo map!!!");
      }

      /* check that number of columns in first and second input argument is 3 */
      if (mxGetN(prhs[0]) != 3) {
          mexErrMsgIdAndTxt("MyToolbox:arrayProduct:not3Columns", "Point cloud input must have 3 columns.");
      }
      if (mxGetN(prhs[1]) != 3) {
          mexErrMsgIdAndTxt("MyToolbox:arrayProduct:not3Columns", "Normals must have 3 columns.");
      }

      /* check that number of columns in the third input argument is 1 */
      if (mxGetN(prhs[2]) != 1) {
          mexErrMsgIdAndTxt("MyToolbox:arrayProduct:not3Columns", "Albedo map must have 1 column.");
      }

      SetVerbosityLevel(VerbosityLevel::VerboseAlways);

      /* Get sizes */
      mwSize nRows = mxGetM(prhs[0]);
      mwSize nCols = mxGetN(prhs[0]);

      /* Read input data  */
      MatrixXd pnts = Map<MatrixXd>( mxGetPr(prhs[0]), nRows, nCols );
      MatrixXd pntsColor = Map<MatrixXd>( mxGetPr(prhs[1]), nRows, nCols );
      double* pntsAlbedo = mxGetPr(prhs[2]); 

      PointCloud pointcloud;
      for(int i = 0; i < nRows; i++){
          pointcloud.points_.push_back(Eigen::Vector3d(pnts(i,0), pnts(i,1), pnts(i,2)));
          pointcloud.colors_.push_back(Eigen::Vector3d(pntsColor(i,0), pntsColor(i,1), pntsColor(i,2))); // values need to be between 0-1
      }

      PrintPointCloud(pointcloud);

      std::shared_ptr<PointCloud> pointcloud_ptr(new PointCloud);
      *pointcloud_ptr = pointcloud;

      // normal estimation
      EstimateNormals(*pointcloud_ptr, three::KDTreeSearchParamRadius(0.05));
      mexPrintf("Normals estimated!!!\n\n");

      OrientNormalsToAlignWithDirection(*pointcloud_ptr, Eigen::Vector3d(0.0, 0.0, 1.0));

      DrawGeometries({pointcloud_ptr}, "Open3D PointCloud", 1920, 1080);

//       Visualizer visualizer;
//    std::shared_ptr<PointCloud> pointcloud_ptr(new PointCloud);
//    *pointcloud_ptr = pointcloud;
//    pointcloud_ptr->NormalizeNormals();
//    BoundingBox bounding_box;
//    bounding_box.FitInGeometry(*pointcloud_ptr);
//       
// //       // test normal estimation
// //       EstimateNormals(*pointcloud_ptr, three::KDTreeSearchParamRadius(0.05));
// 
//    visualizer.CreateWindow("Open3D", 1600, 900);
//    visualizer.AddGeometry(pointcloud_ptr);
//    visualizer.Run();
//    visualizer.DestroyWindow(); 
      mexPrintf("Exiting.\n");
}

where I am passing the points and the corresponding color and albedo values for each point (extracted with a Kinect v2). Then I assign the points and the color to the Open3D PointCloud structure, and I am able to show the point cloud in the visualizer after I have estimated and oriented the normals. Now I would like to create the mesh from the point cloud, but before that: if you look at the point cloud, there are some noisy points that I would like to get rid of, see the image below:

[image: point cloud with noisy outlier points]

Thus, I would like to ask if there is a corresponding function that I can call for this. I couldn't find anything relevant, at least in the files that I've looked at, though I might have missed it. Another idea that came to my mind is whether downsampling could be used for this purpose or not?

Second question: how can I reconstruct the mesh out of the point cloud, and is there any C++ tutorial that I can follow? I've seen the discussion in https://github.com/IntelVCL/Open3D/issues/270, however I am not sure if the answer there still stands. Also, is it possible to use TSDF for this, since I do have the depth and rgb images as well as the intrinsic information of the camera?

And the last question: as I said, I am also passing the albedo value of each point. Is it possible, after I create the mesh, to assign to each created face a corresponding albedo value as an interpolation of the albedo values from the vertices that form the triangle, or in any other way?

Thanks.

You can find an example point cloud here if that helps: ptCloud1.zip

qianyizh commented 6 years ago

We don't have meshing from point clouds in the current version. But it has been prioritized and will be added in the next version.

As for your specific use case, if the input is a depth image, there is a quick hack that can create a mesh from it: just connect the neighboring pixels into quads (@syncle actually we should probably provide such functions for the sake of visualization). You can assign albedo values to the points and the output should be a colored mesh.
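A minimal sketch of that quick hack, assuming a float depth image and the pinhole back-projection used elsewhere in this thread; DepthToQuadMesh is a hypothetical helper, not an Open3D function:

std::shared_ptr<TriangleMesh> DepthToQuadMesh(
        const Image &depth_float, const PinholeCameraIntrinsic &intrinsic)
{
    auto mesh = std::make_shared<TriangleMesh>();
    const double fx = intrinsic.intrinsic_matrix_(0, 0);
    const double fy = intrinsic.intrinsic_matrix_(1, 1);
    const double cx = intrinsic.intrinsic_matrix_(0, 2);
    const double cy = intrinsic.intrinsic_matrix_(1, 2);
    const int w = depth_float.width_, h = depth_float.height_;
    // Back-project every pixel; pixels with zero depth get a dummy vertex
    // that no triangle will reference.
    for (int v = 0; v < h; v++) {
        for (int u = 0; u < w; u++) {
            double d = *PointerAt<float>(depth_float, u, v);
            mesh->vertices_.push_back(Eigen::Vector3d(
                    (u - cx) * d / fx, (v - cy) * d / fy, d));
        }
    }
    // Connect each 2x2 pixel neighborhood into two triangles (one quad)
    // when all four depths are valid.
    for (int v = 0; v + 1 < h; v++) {
        for (int u = 0; u + 1 < w; u++) {
            if (*PointerAt<float>(depth_float, u, v) > 0 &&
                    *PointerAt<float>(depth_float, u + 1, v) > 0 &&
                    *PointerAt<float>(depth_float, u, v + 1) > 0 &&
                    *PointerAt<float>(depth_float, u + 1, v + 1) > 0) {
                int i00 = v * w + u, i01 = i00 + 1;
                int i10 = i00 + w, i11 = i10 + 1;
                mesh->triangles_.push_back(Eigen::Vector3i(i00, i10, i01));
                mesh->triangles_.push_back(Eigen::Vector3i(i01, i10, i11));
            }
        }
    }
    return mesh;
}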

However, we don't suggest directly creating a mesh from depth images. The main reason is that depth images are usually very noisy. So we provide the reconstruction system. In order to get a high quality mesh, you should run through the reconstruction pipeline and get a (roughly) colored mesh with high quality geometry. If you want to further improve the color, color map optimization #339 will help.

As for camera parameters, according to libfreenect2, Kinect v2 has:

float cx = 254.878f;
float cy = 205.395f;
float fx = 365.456f;
float fy = 365.456f;

Hope this is helpful. (@syncle we should probably also provide default parameters for other sensors. Currently we only have PrimeSense.)

syncle commented 6 years ago

> Is there a corresponding function for noise removal?

We don't have it yet. A plausible approach is, for each point, to find the nearest neighbors within a reasonable radius; if there are no neighboring points, it can be regarded as an isolated point. This tutorial would help: http://open3d.org/docs/tutorial/Basic/kdtree.html
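A minimal sketch of that idea, assuming the KDTreeFlann interface from the kdtree tutorial; the helper name, search radius, and neighbor threshold are illustrative, not part of the Open3D API:

std::shared_ptr<PointCloud> RemoveIsolatedPoints(
        const PointCloud &cloud, double radius, int min_neighbors)
{
    KDTreeFlann kdtree;
    kdtree.SetGeometry(cloud);
    auto filtered = std::make_shared<PointCloud>();
    for (size_t i = 0; i < cloud.points_.size(); i++) {
        std::vector<int> indices;
        std::vector<double> distances2;
        // SearchRadius returns the number of neighbors found within `radius`;
        // the query point itself is included in the result.
        int k = kdtree.SearchRadius(cloud.points_[i], radius, indices, distances2);
        if (k > min_neighbors) {  // strictly greater, to discount the point itself
            filtered->points_.push_back(cloud.points_[i]);
            if (cloud.HasColors()) filtered->colors_.push_back(cloud.colors_[i]);
            if (cloud.HasNormals()) filtered->normals_.push_back(cloud.normals_[i]);
        }
    }
    return filtered;
}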

> Another idea that came to my mind is whether downsampling could be used for this purpose or not?

It cannot. Downsampling reduces the density of a point cloud using regular grids, but it essentially does not eliminate isolated points.
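For reference, voxel downsampling in the C++ API looks like the call below (assuming the legacy VoxelDownSample free function, applied here to the pointcloud_ptr from the first snippet); an isolated outlier simply becomes its own voxel cell and survives:

auto downsampled = VoxelDownSample(*pointcloud_ptr, 0.05); // 5 cm voxels: fewer points, same outliers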

> Second question: how can I reconstruct the mesh out of the point cloud, and is there any C++ tutorial that I can follow?

RGBD image to mesh is easy. Please check http://open3d.org/docs/tutorial/Advanced/rgbd_integration.html. As @qianyizh mentioned, the reconstruction system is also useful if you have a sequence of color-depth images. Please check http://open3d.org/docs/tutorial/ReconstructionSystem/index.html. I will make a new PR for adding Kinect2 parameters.

> And the last question: as I said, I am also passing the albedo value of each point. Is it possible, after I create the mesh, to assign to each created face a corresponding albedo value as an interpolation of the albedo values from the vertices that form the triangle, or in any other way?

TSDF integration can integrate intensities to produce colored meshes. You can replace the color channel with albedo values to extract a mesh with albedo.

ttsesm commented 6 years ago

@qianyizh @syncle thank you both for the feedback. I will use TSDF integration, then, in order to create the volume given the rgb and depth images (and from there I will extract the triangulated mesh).

I am now trying to test some code for it. First I tried to create the point cloud and compare it with the one that I have available, but for some reason the output seems to be skewed, as you can see below:

[image: skewed point cloud output]

The code I am using is the following:

Image color_image_8bit;
ReadImage("rgb_img1.png", color_image_8bit);
PrintDebug("RGB image size : %d x %d\n", color_image_8bit.width_, color_image_8bit.height_);

Image depth_image_16bit;
ReadImage("depth_img1.png", depth_image_16bit);
PrintDebug("Depth image size : %d x %d\n", depth_image_16bit.width_, depth_image_16bit.height_);

auto rgbd = CreateRGBDImageFromColorAndDepth(color_image_8bit, depth_image_16bit, 1000.0, 4.0, false);

PinholeCameraIntrinsic camera;
// camera.SetIntrinsics(512, 424, 254.878, 205.395, 365.456, 365.456); // kinect v2 - color2depth
camera.SetIntrinsics(1920, 1080, 975.7193, 545.9533, 1059.9718, 1059.9718); // kinect v2 - depth2color

auto pointcloud_ptr = CreatePointCloudFromRGBDImage(*rgbd, camera);
DrawGeometries({pointcloud_ptr}, "PointCloud from Depth Image", 1920, 1080);

Any idea what could be wrong?

Notice that I am using a depth2color mapping; I am not sure whether there is some kind of transformation that I need to take into account.

Below are the images: mat2open3D.zip

ttsesm commented 6 years ago

@syncle how do I set the camera position? I think this is my problem in the above case with the skewed points.

ttsesm commented 6 years ago

@qianyizh @syncle I tried to play with the transformation function, but it seems that only the rotation affects the output; translation doesn't seem to do anything:

const Eigen::MatrixXd transformation = (Eigen::MatrixXd(4,4)<<
      1.0,  0.0, 0.0, 960.0,
      0.0, -1.0, 0.0, 540.0,
      0.0, 0.0, -1.0, 2155.0,
      0.0,  0.0,  0.0, 1.0).finished();

pointcloud_ptr->Transform(transformation);

I also tried kinect images from other datasets, and the skew effect shows up there as well. Then I tried to load a pair from your TestData folder, and this shows up correctly with the corresponding intrinsics. Thus, most likely there is something I am missing here.

syncle commented 6 years ago

> I tried to play with the transformation function, but it seems that only the rotation affects the output; translation doesn't seem to do anything.

That's because the visualization class automatically sets the bounding box of the object and adjusts the viewpoint accordingly. If you visualize the single, translated geometry on its own, you will see the same thing.
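As a quick illustration of this, drawing the original and the transformed copy together makes the viewer fit a single bounding box around both geometries, so the translation becomes visible (a sketch reusing the variables from the snippet above):

auto moved = std::make_shared<PointCloud>(*pointcloud_ptr);
moved->Transform(transformation); // now offset relative to the original
DrawGeometries({pointcloud_ptr, moved}, "Original vs. transformed", 1920, 1080);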

I am looking at your depth-color image pair. I hope to share my findings soon.

syncle commented 6 years ago

Try using

PinholeCameraIntrinsic(1920, 1080, 1059.9718, 1059.9718, 975.7193, 545.9533)

Instead of

PinholeCameraIntrinsic(1920, 1080, 975.7193, 545.9533, 1059.9718, 1059.9718)

You almost got it right, but the principal point and focal length were not placed in the correct positions. With the proper values I got the following image:

[screenshots: point cloud reconstructed with the corrected intrinsics]

Question: do you think these are the standard (factory) Kinect2 intrinsic parameters? If necessary, I would add them as a preset parameter for convenience.

ttsesm commented 6 years ago

oohh come on, seriously. I was so sure that I had put them in correctly. It works here as well now, many thanks.

Yes, I think that you can use these intrinsics as the standard ones for the depth2color mapping (color2depth is already added, as I've seen). These were the numbers that I got out of the box from the sensor. I applied a manual calibration as well, and the numbers were the following:

PinholeCameraIntrinsic(1920, 1080, 1076.24555000359, 1077.8051050571, 934.512032131513, 531.80880278762); so as you can see, not that big a difference.


Regarding the extraction of the volume and consequently the mesh, the following worked like a charm:

      Eigen::Matrix4d extrinsics = Eigen::Matrix4d::Identity();
      volume.Integrate(*rgbd, camera, extrinsics);

      auto pcd = volume.ExtractPointCloud();
      auto mesh = volume.ExtractTriangleMesh();
      auto voxel = volume.ExtractVoxelPointCloud();

Three quick questions, though:

  1. if I want to downsample the number of triangles, is it possible to do it straightforwardly on the mesh, or do I need to do it first on the RGBDImage extraction (I think I've seen such a function or something similar)?

  2. is it possible to estimate and/or reorient the normals of the vertices and faces of the mesh, or does this need to be done before I compute the volume? I noticed that for point clouds I can apply the following:

// normal estimation
EstimateNormals(*pointcloud_ptr, three::KDTreeSearchParamRadius(0.05));
OrientNormalsToAlignWithDirection(*pointcloud_ptr, Eigen::Vector3d(0.0, 0.0, 1.0));

Would something similar work for the mesh? I think mesh->compute_vertex_normals would work for the normal estimation, but is there something for the orientation?

  3. and last, I have my albedo image as a one-channel float image [0-1]; what is the proper way to apply it on the faces? I noticed that there is a mesh->vertex_colors_ but didn't see anything related to face color. I know that I can paint them according to http://open3d.org/docs/tutorial/Basic/mesh.html; however, this gives a uniform colouring. Can I actually replace the rgb image in CreateRGBDImageFromColorAndDepth() with the albedo image? Though, even if this works, it will provide the albedo values on the vertices and not on the faces.

Sorry for that many questions but I am trying to learn the library and there are many things :-).

syncle commented 6 years ago

> if I want to downsample the number of triangles, is it possible to do it straightforwardly on the mesh, or do I need to do it first on the RGBDImage extraction (I think I've seen such a function or something similar)?

We don't have a method that downsamples a mesh directly. However, you may increase the unit voxel size used for the integration. For example, the value voxel_length = 4.0 / 512.0 means 512 voxels over a 4 m length. If you increase it to 5.0 / 512.0 or so, depending on your needs, this will produce a coarser mesh.
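For instance, with the same ScalableTSDFVolume constructor used in the snippets above (the values here are illustrative only):

ScalableTSDFVolume volume_fine(4.0 / 512.0, 0.04, true);   // ~7.8 mm voxels
ScalableTSDFVolume volume_coarse(8.0 / 512.0, 0.04, true); // ~15.6 mm voxels: coarser mesh, fewer triangles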

> Would something similar work for the mesh? I think mesh->compute_vertex_normals would work for the normal estimation, but is there something for the orientation?

A triangle mesh already has an orientation: the ordering of the three vertex indices (clockwise or counter-clockwise) indicates the triangle orientation. A point cloud does not have an orientation, which is why there are functions like OrientNormalsToAlignWithDirection. Re-orienting a mesh normal means the vertex ordering also needs to be changed. To my knowledge, there is no implementation for this purpose yet.
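To make concrete what re-orienting would entail, here is a hypothetical helper (not Open3D API): flipping a single triangle means swapping two of its vertex indices, which reverses the winding order and hence the face normal:

void FlipTriangleOrientation(TriangleMesh &mesh, size_t triangle_index)
{
    auto &tri = mesh.triangles_[triangle_index];
    std::swap(tri(1), tri(2)); // (v0, v1, v2) -> (v0, v2, v1) reverses the winding
}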

> Can I actually replace the rgb image in CreateRGBDImageFromColorAndDepth() with the albedo image? Though, even if this works, it will provide the albedo values on the vertices and not on the faces.

Yes. I think you can create a [0,1] float type image and use it for making the RGBD image (or replace RGBDImage.color_ with your image). If you integrate that RGBDImage, you will get a mesh with albedo.
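A minimal sketch of the replacement variant, using the variable names from ttsesm's snippets in this thread (note that, as it turns out further down, Integrate() at this point still rejects float color images, so this only becomes useful once that restriction is lifted):

auto rgbd = CreateRGBDImageFromColorAndDepth(
        color_image_8bit, depth_image_16bit, 1000.0, 3.0, false);
rgbd->color_ = *albedo_map_ptr; // swap the color channel for the [0,1] float albedo map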

ttsesm commented 6 years ago

@syncle thanks a lot for the feedback. Increasing the unit voxel size worked fine ;-).

For the triangle normals, I think I can work with what is extracted from the mesh itself. I am now trying the albedo thing.

Can you tell me whether I can retrieve the triangle colors from the mesh, or only the vertex colors? I've seen only the mesh->vertex_colors_ (by the way, what is its type, is it Eigen::Vector3i?) and nothing related to mesh->triangle_colors_, so I guess this is not there yet. Is voxel->colors_ related in any way?

syncle commented 6 years ago

mesh->vertex_colors_ is of Eigen::Vector3d type. As of now, Open3D does not have mesh->triangle_colors_.

voxel->colors_ is used for making mesh->vertex_colors_, but I don't think the values are directly assigned one to one. This is because a single voxel does not correspond to a single vertex in the output mesh; the vertex color is rather a weighted color value depending on the output shape. @qianyizh, can you confirm?

ttsesm commented 6 years ago

@syncle I see, thanks for the feedback. I think I can deal with a workaround that I have in mind regarding mesh->triangle_colors_ for the moment.

Now, regarding the albedo image and the mesh: I am successfully creating an RGBDImage using the albedo map image instead of the color image; however, volume.Integrate() will not work, since it expects a color image with 3 channels and 1 byte per channel. The code snippet that I am using is the following:

      Image depth_image_16bit;
      ReadImage("depth_img1.png", depth_image_16bit);
      PrintDebug("Depth image size : %d x %d x %d\n", depth_image_16bit.width_, depth_image_16bit.height_, depth_image_16bit.num_of_channels_);

      Image albedo_map;
      ReadImage("albedo_img1.png", albedo_map);
      PrintDebug("Albedo image size : %d x %d x %d\n", albedo_map.width_, albedo_map.height_, albedo_map.num_of_channels_);

      auto albedo_map_ptr = ConvertDepthToFloatImage(albedo_map, 10000.0, 3.0);

      ScalableTSDFVolume volume_albedo(10.0 / 512.0, 0.04, true);
      auto rgbd_albedo = CreateRGBDImageFromColorAndDepth(*albedo_map_ptr, depth_image_16bit, 1000.0, 3, false);      

      PinholeCameraIntrinsic camera;
      camera.SetIntrinsics(1920, 1080, 1076.24555000359, 1077.8051050571, 934.512032131513, 531.80880278762); // kinect .v2 - depth2color

      const Eigen::MatrixXd extrinsics = (Eigen::MatrixXd(4,4)<<
      1.0,0.0,0.0,0.0,
      0.0,-1.0,0.0,0.0,
      0.0,0.0,-1.0,0.0,
      0.0,0.0,0.0,1.0).finished();

//      auto pointcloud_ptr = CreatePointCloudFromRGBDImage(*rgbd_albedo, camera, extrinsics);
//    DrawGeometries({pointcloud_ptr}, "PointCloud from Depth Image", 1920, 1080);

      volume_albedo.Integrate(*rgbd_albedo, camera, extrinsics);

      auto mesh_albedo = volume_albedo.ExtractTriangleMesh();

      WriteTriangleMesh("mesh_test1.ply",*mesh_albedo);

and here is what ScalableTSDFVolume::Integrate() expects as input, and the reason I am not able to get the mesh output: https://github.com/IntelVCL/Open3D/blob/35be432d45bf47bbcdf920c62d7d9d446ac6d36d/src/Core/Integration/ScalableTSDFVolume.cpp#L57

void ScalableTSDFVolume::Integrate(const RGBDImage &image,
        const PinholeCameraIntrinsic &intrinsic,
        const Eigen::Matrix4d &extrinsic)
{
    if ((image.depth_.num_of_channels_ != 1) ||
            (image.depth_.bytes_per_channel_ != 4) ||
            (image.depth_.width_ != intrinsic.width_) ||
            (image.depth_.height_ != intrinsic.height_) ||
            (with_color_ && image.color_.num_of_channels_ != 3) ||
            (with_color_ && image.color_.bytes_per_channel_ != 1) ||
            (with_color_ && image.color_.width_ != intrinsic.width_) ||
            (with_color_ && image.color_.height_ != intrinsic.height_)) {
        PrintWarning("[ScalableTSDFVolume::Integrate] Unsupported image format. Please check if you have called CreateRGBDImageFromColorAndDepth() with convert_rgb_to_intensity=false.\n");
        return;
    }
....
....
}

Thus, I wanted to ask whether you can think of any workaround for that. The workaround I thought of was to create the volume with the original rgb image, which works without issues, and to get the corresponding vertex colors by creating the corresponding point cloud with CreatePointCloudFromRGBDImage(*rgbd_albedo, camera, extrinsics). This one seems to take as input a 1-channel float image: https://github.com/IntelVCL/Open3D/blob/35be432d45bf47bbcdf920c62d7d9d446ac6d36d/src/Core/Geometry/PointCloudFactory.cpp#L65

However, though the output should be a 3-channel color with the same albedo value in all channels, I am getting a different value instead. I tried to reproduce the values myself based on the code from the function above, and in this case I am able to get the correct values, for example:

      auto rgbd_albedo = CreateRGBDImageFromColorAndDepth(*albedo_map_ptr, depth_image_16bit, 1000.0, 3, false);
      auto rgbd_albedo_color = rgbd_albedo->color_;
      auto rgbd_albedo_depth = rgbd_albedo->depth_;

      double scale = (sizeof(float) == 1) ? 255.0 : 1.0;
//       cout << scale << endl;

      int NC = 1;
      for (int i = 0; i < 10/*rgbd_albedo_color.height_*/; i++) {
          float *p = (float *)(rgbd_albedo_depth.data_.data() + i * rgbd_albedo_depth.BytesPerLine());
          float *pc = (float *)(rgbd_albedo_color.data_.data() + i * rgbd_albedo_color.BytesPerLine());
//         cout << *pc << endl;
          for (int j = 0; j < 2/*rgbd_albedo_depth.width_*/; j++, p++, pc += 1) {
//               if (*p > 0) {
//                   pointcloud->colors_.push_back(Eigen::Vector3d(pc[0], pc[(NC - 1) / 2], pc[NC - 1]) / scale);
//                   Eigen::Vector3d vec = (Eigen::Vector3d(pc[0], pc[(1 - 1) / 2], pc[1 - 1]) / scale);
                  cout << pc[0] / scale << " :: " << pc[(NC - 1) / 2] /scale << " :: " << pc[NC - 1] /scale << endl;
//                   cout << vec << endl;
//             }
        }
      }

I consider NC as 1 based on the code here:

https://github.com/IntelVCL/Open3D/blob/35be432d45bf47bbcdf920c62d7d9d446ac6d36d/src/Core/Geometry/PointCloudFactory.cpp#L133

Would it be easy for you to check whether you are getting a similar result, or do you have any idea about what I am doing wrong?

Below are the albedo and depth images, in case you want to try them. Be aware that the scale of the albedo image is 10000.0. albedo_depth_imgs.zip

ttsesm commented 6 years ago

@syncle consider the problem solved; the issue is in the wrapper that sends me back the values, which for some reason modifies them. Otherwise, Open3D correctly provides back what is written in the source code.

I will close this issue now; hopefully this thread will be useful for others.

Thanks again for your support :+1: and your time :-).

ttsesm commented 6 years ago

@syncle ok, it seems that my idea of getting the corresponding albedo color of the mesh vertices from the point cloud does not work, since my albedo image, as I said before, is double [0-1]. Any idea how I can attach the albedo values to the mesh?

      auto rgbd = CreateRGBDImageFromColorAndDepth(color_image_8bit, depth_image_16bit, 1000.0, 3, false); // works fine!!!!
      auto rgbd_albedo = CreateRGBDImageFromColorAndDepth(*albedo_map_ptr, depth_image_16bit, 1000.0, 3, false); // works fine!!!

      volume.Integrate(*rgbd, camera, extrinsics); // works fine!!!!
      auto mesh = volume.ExtractTriangleMesh(); // works fine!!!!
      volume_albedo.Integrate(*rgbd_albedo, camera, extrinsics); // does not work!!!!

      auto pointcloud_ptr = CreatePointCloudFromRGBDImage(*rgbd_albedo, camera, extrinsics); // works 

however, pointcloud_ptr->points_ is different from mesh->vertices_. I actually thought I could map these two so that I could use pointcloud_ptr->colors_ as the assigned albedo values.

Also, I noticed that my mesh->vertex_colors_ is returned as double in the range [0-1]; can I simply scale it back to uint8 [0-255]?
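(For what it's worth, converting a [0,1] double color channel back to uint8 would just be a scale, round, and clamp; this is generic C++ needing <cmath> and <algorithm>, not an Open3D helper:

uint8_t ColorToUint8(double c)
{
    // scale [0,1] to [0,255], round to nearest, clamp against overshoot
    return (uint8_t)std::min(255.0, std::max(0.0, std::round(c * 255.0)));
}

)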

syncle commented 6 years ago

Hi @TheodoreT, I think I can answer better with a minimal code and data snippet. Can you share one with me?

ttsesm commented 6 years ago

Hi @syncle sure, this should do the job:

// cpp system headers
#define _USE_MATH_DEFINES
#include <cmath>
#include <iostream>
#include <vector>
#include <fstream>
#include <memory>

// open3D headers
#include <Eigen/Core>
#include <Eigen/Dense>
#include <Core/Core.h>
#include <IO/IO.h>
#include <Visualization/Visualization.h>

using namespace Eigen;
using namespace std;
using namespace three;

int main()
{
    cout << "Hello Open3D!!!!" << endl;

    SetVerbosityLevel(VerbosityLevel::VerboseAlways);

    Image color_image_8bit;
    ReadImage("C:/dev/Open3D_v2/mat2open3D/rgb_img1.png", color_image_8bit);
    PrintDebug("RGB image size : %d x %d x %d\n", color_image_8bit.width_, color_image_8bit.height_, color_image_8bit.num_of_channels_);

    Image depth_image_16bit;
    ReadImage("C:/dev/Open3D_v2/mat2open3D/depth_img1.png", depth_image_16bit);
    PrintDebug("Depth image size : %d x %d x %d\n", depth_image_16bit.width_, depth_image_16bit.height_, depth_image_16bit.num_of_channels_);

    Image albedo_map;
    ReadImage("C:/dev/Open3D_v2/mat2open3D/albedo_img1.png", albedo_map);
    PrintDebug("Albedo image size : %d x %d x %d\n", albedo_map.width_, albedo_map.height_, albedo_map.num_of_channels_);

    auto albedo_map_ptr = ConvertDepthToFloatImage(albedo_map, 10000.0, 3.0);
    PrintDebug("Albedo ptr image size : %d x %d x %d\n", albedo_map_ptr->width_, albedo_map_ptr->height_, albedo_map_ptr->num_of_channels_);

    ScalableTSDFVolume volume(10.0 / 512.0, 0.04, true); 
    ScalableTSDFVolume volume_albedo(10.0 / 512.0, 0.04, true);
    auto rgbd = CreateRGBDImageFromColorAndDepth(color_image_8bit, depth_image_16bit, 1000.0, 3, false); // this works!!!!
    auto rgbd_albedo = CreateRGBDImageFromColorAndDepth(*albedo_map_ptr, depth_image_16bit, 1000.0, 3, false); // this works!!!!

    PinholeCameraIntrinsic camera;
    camera.SetIntrinsics(1920, 1080, 1076.24555000359, 1077.8051050571, 934.512032131513, 531.80880278762); // kinect .v2 - depth2color

    const Eigen::MatrixXd extrinsics = (Eigen::MatrixXd(4, 4) <<
        1.0, 0.0, 0.0, 0.0,
        0.0, -1.0, 0.0, 0.0,
        0.0, 0.0, -1.0, 0.0,
        0.0, 0.0, 0.0, 1.0).finished();

    auto pointcloud_ptr = CreatePointCloudFromRGBDImage(*rgbd_albedo, camera, extrinsics); // this works!!!

    DrawGeometries({ pointcloud_ptr }, "PointCloud from Depth Image", 1920, 1080);

    volume.Integrate(*rgbd, camera, extrinsics); // works fine!!!
    volume_albedo.Integrate(*rgbd_albedo, camera, extrinsics); // does not work!!!!

    //auto pcd = volume.ExtractPointCloud();
    auto mesh = volume.ExtractTriangleMesh();
    auto mesh_albedo = volume_albedo.ExtractTriangleMesh(); // does not work of course since volume from before was not created!!!!!
    //       auto voxel = volume.ExtractVoxelPointCloud();

    DrawGeometries({ pointcloud_ptr }, "PointCloud from Depth Image", 1920, 1080); // correctly the albedo values were assigned to the point cloud

    mesh->ComputeVertexNormals();
    mesh->ComputeTriangleNormals();

    /* Output matrices */
    MatrixXd v = Eigen::Map<Eigen::MatrixXd>(mesh->vertices_[0].data(), 3, mesh->vertices_.size()); // mesh vertices
    MatrixXi f = Eigen::Map<Eigen::MatrixXi>(mesh->triangles_[0].data(), 3, mesh->triangles_.size()); // mesh triangles
    MatrixXd v_c = Eigen::Map<Eigen::MatrixXd>(mesh->vertex_colors_[0].data(), 3, mesh->vertex_colors_.size()); // mesh vertex colors
    MatrixXd v_albedo = Eigen::Map<Eigen::MatrixXd>(reinterpret_cast<double*>(pointcloud_ptr->colors_.data()), 3, pointcloud_ptr->colors_.size()); // point cloud albedo values based on pointcloud_ptr, not what was expected finally
    MatrixXd v_n = Eigen::Map<Eigen::MatrixXd>(mesh->vertex_normals_[0].data(), 3, mesh->vertex_normals_.size()); // mesh vertex normals
    MatrixXd f_n = Eigen::Map<Eigen::MatrixXd>(mesh->triangle_normals_[0].data(), 3, mesh->triangle_normals_.size()); // mesh triangle normals

    //       WriteTriangleMesh("mesh_test1.ply",*mesh);

        cout << "Exiting." << endl;
        return 0;
}

So what I need is to assign the albedo_map_ptr values to the corresponding mesh->vertex_colors_, or to obtain both. data.zip

ttsesm commented 6 years ago

@syncle any update on the above?

syncle commented 6 years ago

Not yet. I will try today.

ttsesm commented 6 years ago

I see, sure :-). Looking forward to it.

syncle commented 6 years ago

> volume_albedo.Integrate(*rgbd_albedo, camera, extrinsics); // does not work!!!!

Currently, the TSDF volume only accepts 3-channel, 8-bit unsigned char color images plus a float-type depth image. This is not reasonable, as an RGBDImage can be made from a 32-bit float color image plus a 32-bit depth image. I am extending the TSDF implementation to allow this type too. I hope to submit a PR for this tonight.
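Until that lands, a hedged workaround would be to quantize the float albedo map into the 3-channel 8-bit format that the current Integrate() accepts, at the cost of 8-bit quantization. A sketch, assuming the PrepareImage and PointerAt helpers from Image.h (MakeColor8bitFromFloat is a local name, not Open3D API):

Image MakeColor8bitFromFloat(const Image &float_img)
{
    Image out;
    out.PrepareImage(float_img.width_, float_img.height_, 3, 1);
    for (int v = 0; v < float_img.height_; v++) {
        for (int u = 0; u < float_img.width_; u++) {
            float c = *PointerAt<float>(float_img, u, v);
            // quantize [0,1] float to 8 bits, clamped against overshoot
            uint8_t b = (uint8_t)std::min(255.0f, std::max(0.0f, c * 255.0f + 0.5f));
            // replicate the single channel into r, g, b
            for (int ch = 0; ch < 3; ch++) {
                *PointerAt<uint8_t>(out, u, v, ch) = b;
            }
        }
    }
    return out;
}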

ttsesm commented 6 years ago

thanks @syncle, @qianyizh.

guesspalm commented 5 years ago

@TheodoreT Hi, I'm facing some problems aligning the depth image to the color image when using Kinect v2. There are huge noisy points in my depth image. I saw that your depth image is neat. Could you give me some advice, please? Thanks.

ttsesm commented 5 years ago

@guesspalm could you provide some further details about how you are acquiring your images, meaning what library or software you are using (e.g. kinect sdk, libfreenect, matlab, etc.)? Usually all of the previously mentioned approaches provide an out-of-the-box framework for getting aligned images. Regarding the noise, from my experience bilateral filtering for smoothing the surfaces worked quite nicely to remove the bumps over the surfaces (the wavy effect that you usually get, with low and high peaks). Also, playing with the first parameter of the volume function

ScalableTSDFVolume volume(x / 512.0, ...); // x: the tunable voxel-size numerator, e.g. 4.0 or 10.0 as used earlier in this thread

seems to help with the final output of the mesh, by giving smaller or larger facets on the surface.
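For reference, a generic bilateral-filter kernel over a float depth image looks like the sketch below (this is not an Open3D function from that era; the radius and sigma values are illustrative and would need tuning per sensor):

float BilateralAt(const Image &depth, int u0, int v0,
                  int radius, float sigma_s, float sigma_r)
{
    float center = *PointerAt<float>(depth, u0, v0);
    if (center <= 0) return center; // leave invalid pixels untouched
    float sum = 0.0f, wsum = 0.0f;
    for (int dv = -radius; dv <= radius; dv++) {
        for (int du = -radius; du <= radius; du++) {
            int u = u0 + du, v = v0 + dv;
            if (u < 0 || v < 0 || u >= depth.width_ || v >= depth.height_) continue;
            float d = *PointerAt<float>(depth, u, v);
            if (d <= 0) continue;
            // spatial weight falls off with pixel distance; range weight falls
            // off with depth difference, which preserves depth discontinuities
            float ws = std::exp(-(du * du + dv * dv) / (2.0f * sigma_s * sigma_s));
            float wr = std::exp(-(d - center) * (d - center) / (2.0f * sigma_r * sigma_r));
            sum += ws * wr * d;
            wsum += ws * wr;
        }
    }
    return wsum > 0.0f ? sum / wsum : center;
}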