Besides, sorry to bother you again. The completed point cloud generated by this project looks like this. Please help.
Hi @JillWangJill ,
Glad this might be useful to you!
So the main difference between the pointcloud and the mesh that get dumped out is that the pointcloud simply verifies the transforms -- aligning all the input clouds together and making sure they work -- whereas the mesh uses projective geometry (mapping depth data to pixels and raytracing). If you're seeing a valid pointcloud but an almost empty mesh, that probably means one of the assumptions of the projective step is wrong.
Here are a few things to check:
1. Is the depth data in the camera's reference frame?
2. Are the camera intrinsics (fx, fy, cx, cy) correct for your sensor?
3. Are the transforms in the right coordinate convention (camera-to-world)?
As for the left and right clouds, that alone isn't a problem -- your clouds are saved in ASCII format (human readable), which wastes file size but shouldn't be any harder for the program to read. If you have sample data I'm happy to have a look.
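(Side note: if file size ever becomes a concern, PCL can write the same cloud in binary. A minimal sketch -- not something you need to change for this to work:)

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>

int main ()
{
  pcl::PointCloud<pcl::PointXYZ> cloud;
  cloud.push_back (pcl::PointXYZ (1.0f, 2.0f, 3.0f)); // toy content

  // Same data, two encodings -- the reader handles either:
  pcl::io::savePCDFileASCII ("cloud_ascii.pcd", cloud);   // human-readable, larger
  pcl::io::savePCDFileBinary ("cloud_binary.pcd", cloud); // compact, faster I/O
  return 0;
}
```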
Another important point: is there a way to get your system to save organized point clouds instead? At the moment you're saving unorganized ones (1x21777 rather than 640x480), which means my executable needs to reproject the points itself. That could lose a lot of data, especially if the focal length is somehow off. It's much better to have the Kinect save organized clouds from the get-go, and to add the --organized flag to the integrate script; see the sketch below.
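Roughly, an organized save looks like this (untested sketch; fx/fy/cx/cy below are the stock Kinect v1 defaults -- substitute your calibration -- and depth_at() is a stand-in for however you read your depth image, in meters):

```cpp
#include <limits>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>

// Stand-in for your depth image lookup (meters; NaN where the sensor saw nothing)
float depth_at (int u, int v)
{ return std::numeric_limits<float>::quiet_NaN (); }

int main ()
{
  const float fx = 525.f, fy = 525.f, cx = 319.5f, cy = 239.5f; // stock Kinect v1

  pcl::PointCloud<pcl::PointXYZ> cloud;
  cloud.width = 640;
  cloud.height = 480;       // height > 1 is what makes the cloud "organized"
  cloud.is_dense = false;   // keep NaN placeholders instead of dropping points
  cloud.points.resize (cloud.width * cloud.height);

  for (int v = 0; v < 480; v++)
    for (int u = 0; u < 640; u++)
    {
      pcl::PointXYZ &pt = cloud (u, v); // (column, row) accessor
      pt.z = depth_at (u, v);
      pt.x = (u - cx) * pt.z / fx;      // a NaN z propagates into x and y
      pt.y = (v - cy) * pt.z / fy;
    }
  pcl::io::savePCDFileBinary ("frame.pcd", cloud);
  return 0;
}
```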
Dear @sdmiller, thank you so much for your kind reply!
As for the three tips in your first comment, I checked and got the following results. 1. I'm sorry, but I don't know how to check whether the data is in the camera's reference frame :( Please kindly help. 2. The intrinsics should be correct; when collecting the pcd files I am using a Kinect v1, with DepthMapFactor set to 5000. 3. I guess the transforms are in the right coordinates.
As for the organized point cloud problem, I managed to do it! Before, I simply skipped any point in depth.png that had no depth value and did not add it to the pcd file, which made my point cloud smaller than 640*480. This time I set those points (the ones without a depth value) to 0 and push them into the pcd file as well.
Also, I switched my source files to another set of data, because I suspect the previous set had a depth problem (most Z values were 257, which is probably what ruined the dumped-out mesh. Anyway :-) ).
The new data works, but the result doesn't look good. It is like this.
And the pcd files are organized.
I guess the pose is not right. But I directly save the pose from ORB-SLAM2, and the pose is a 4*4 transform matrix, camera to world. I uploaded my files to a Google Drive link. Would you mind taking a look at them at your convenience? Sincere thanks in advance!! https://drive.google.com/drive/folders/1xWHgGWzQuxoCuX8EV7TlsPRtwt89IxS7?usp=sharing
Thank you @sdmiller! The problem was that I used the inverted pose.
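(For anyone who hits the same thing: the fix was just flipping the transform convention before feeding the pose in. A rough sketch with Eigen, assuming the 4x4 has already been parsed from the ORB-SLAM2 trajectory:)

```cpp
#include <Eigen/Geometry>

int main ()
{
  // Placeholder for the 4x4 camera-to-world pose saved from ORB-SLAM2
  Eigen::Matrix4f Twc = Eigen::Matrix4f::Identity ();

  // Interpret it as a rigid transform and invert to get world-to-camera
  // (the Isometry inverse is cheaper and more stable than a general 4x4 inverse)
  Eigen::Isometry3f T (Twc);
  Eigen::Matrix4f Tcw = T.inverse ().matrix ();
  (void) Tcw;
  return 0;
}
```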
But I have another tedious problem. Could you please tell me how to modify the depth factor? My Kinect v1 uses 1000 as the depth factor. When I test by myself to generate the overall point cloud, with cx, cy, fx, fy specified and depth factor = 1000, I get an accurate point cloud. But with depth factor = 5000, the point cloud fails.
Thanks in advance and this is really a brilliant fusion project!
Hi @JillWangJill ,
Glad it worked out! Apologies for being away this weekend and unable to respond.
If I'm understanding right, I believe the "depth factor" is the unit of the Z value? And depth factor = 1000 means that Z is an unsigned short representing "millimeters from sensor". So I'm unsure how this would be a tunable parameter to, say, 5000 -- that would inflate all distances by a factor of 5x, which would no longer be metrically meaningful for your SLAM system.
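In other words, the only place a factor like that should appear is when converting the raw 16-bit depth into metric Z. A sketch (if your pipeline really does use 5000, this division is where it would belong -- some datasets do pre-scale their depth PNGs so that 5000 raw units = 1 meter):

```cpp
#include <cstdint>
#include <limits>

// Raw 16-bit depth -> meters. depth_factor = 1000 means the raw value is
// millimeters (the Kinect v1 convention).
float raw_to_meters (uint16_t raw, float depth_factor)
{
  if (raw == 0) // 0 encodes "no measurement" on the Kinect
    return std::numeric_limits<float>::quiet_NaN ();
  return raw / depth_factor;
}
```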
My integrate script does offer "--cloud-units" and "--pose-units" input parameters, which let you specify the "Units of the data, in meters" and the "Units of the poses, in meters", respectively. This is just in case someone wants to use clouds in, say, millimeters rather than meters -- a cloud stored in millimeters would correspond to cloud units of 0.001.
The CPUTSDF code itself assumes that the X, Y, and Z of the cloud are in the same units. It further "assumes" that those units are meters, though this is a pretty weak assumption and only matters for sensor-specific things (max-sensor-dist, min-sensor-dist, trunc-dist, etc.).
Regarding your handling of missing data: the PCL "correct" way to handle missing depth is to make a point with an X, Y, and Z value of std::numeric_limits<float>::quiet_NaN (), rather than 0.
Note, for example, this line in my "integrate" script, when I create an organized cloud:
```cpp
// Pre-fill every point's z with NaN ("no measurement") before copying in valid depths
for (size_t j = 0; j < cloud_organized->size (); j++)
  cloud_organized->at (j).z = std::numeric_limits<float>::quiet_NaN ();
```
Thank you so much! I modified the original pcd files to fix the depth factor problem, and it worked out! Thanks for providing such marvelous code.
Hi, thank you so much for providing this fusion version.
I ran into a problem testing data from my Kinect v1 and ORB-SLAM2. I collected the key frames' poses as well as the pcds, but the mesh.ply does not seem to work very well for me.
For example, the following is the completed point cloud, and the mesh should be almost the same as this one.
However, my mesh.ply looks like this. Only a few points.
I can't find out where the problem is. But I noticed that my pcd files are different from the sample database's. Left is the sample database's pcd file (opened in Sublime). Right is mine. Is this a problem?
Sorry to bother you, but I am really desperate to solve this.