MIT-SPARK / TEASER-plusplus

A fast and robust point cloud registration library

Question about lidar odometry application and bad alloc. #49

Closed · femust closed this issue 4 years ago

femust commented 4 years ago

Hey,

so for some time I have been trying to run your TEASER as a LIDAR odometry module for comparison in my research. One TEASER LIDAR odometry approach is point-to-point: I have 3D correspondences and I feed them into TEASER. The other is feature-based: I compute FPFH features and feed those into TEASER. I use the KITTI dataset, and I noticed that when I try to find the frame-to-frame transformation without downsampling the point cloud, I always get a bad alloc error (one point cloud has ~120k points). If I do voxel downsampling (roughly the step in the sketch after my questions below), then either I get a result but have to wait a very long time - more than 2-3 seconds per frame (even longer in the feature-based case) - or I downsample even more, which brings processing down to around 1 second per frame but makes the results very inaccurate. So I am curious about the following things:

  1. Is it possible to somehow integrate TEASER as a LIDAR odometry module? Have you tried that?
  2. The bad alloc comes from Eigen. Is it possible that the problem is so large that my PC doesn't have enough memory to solve the frame-to-frame transformation?
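
For reference, my downsampling step looks roughly like this (a minimal sketch using PCL's `VoxelGrid` filter; the leaf size here is just an example value, not necessarily what I run with):

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

// Downsample a KITTI scan with a voxel grid before matching / TEASER.
// The leaf size controls the speed vs. accuracy trade-off I described above.
pcl::PointCloud<pcl::PointXYZ>::Ptr
downsample(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud, float leaf_size) {
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud(cloud);
  voxel.setLeafSize(leaf_size, leaf_size, leaf_size);  // e.g. 0.5 m, example value
  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
  voxel.filter(*filtered);
  return filtered;
}
```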
jingnanshi commented 4 years ago

@femust Thank you for your interest.

One Teaser Lidar Odometry approach is in a point-to-point manner - I have 3D correspondences and then I put it into Teaser; the other is feature-based approach - I calculate FPFH features and then I put this into Teaser.

The first approach you described here is not quite right: given two consecutive LIDAR scans, you can't get correspondences without some sort of matching algorithm. You can either 1) use nearest-neighbor search directly on the points, or 2) compute features and do NN search in feature space.
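
For option 2), here is a minimal sketch of NN matching in FPFH feature space using PCL (illustrative only, not the TEASER++ API; it assumes you already computed `src_features` and `tgt_features`, e.g. with `pcl::FPFHEstimation`). The resulting index pairs are the putative correspondences you would then feed into TEASER++:

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <utility>
#include <vector>

// For each source descriptor, find its nearest neighbor among the target
// descriptors. Each (source index, target index) pair is one putative match.
std::vector<std::pair<int, int>> matchFeatures(
    const pcl::PointCloud<pcl::FPFHSignature33>::Ptr& src_features,
    const pcl::PointCloud<pcl::FPFHSignature33>::Ptr& tgt_features) {
  pcl::KdTreeFLANN<pcl::FPFHSignature33> tree;
  tree.setInputCloud(tgt_features);

  std::vector<std::pair<int, int>> correspondences;
  std::vector<int> nn_index(1);
  std::vector<float> nn_dist(1);
  for (std::size_t i = 0; i < src_features->size(); ++i) {
    if (tree.nearestKSearch(*src_features, static_cast<int>(i), 1,
                            nn_index, nn_dist) > 0) {
      correspondences.emplace_back(static_cast<int>(i), nn_index[0]);
    }
  }
  return correspondences;
}
```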

I use Kitti dataset and I noticed that when I try to find the transformation from frame to frame if I don't downsample the point cloud I always get bad alloc error (one point cloud has ~120k points)

Yes, 120k points is a lot. If the number of correspondences you feed into TEASER++ is also on the order of 120k, you will definitely get bad_alloc errors: the solver builds a pairwise compatibility graph over the correspondences for its max-clique inlier selection, so memory use grows roughly quadratically with the number of correspondences.

is it possible to integrate somehow Teaser in Lidar Odometry manner, have you tried that one?

Yes, it's possible. I have not personally tried it, but I know people who have tried it on Kitti with FCGF.

this bad alloc is due to Eigen, is it possible that the problem is so big/large that my pc doesn't have enough memory to solve the frame to frame transformation problem?

Yes.

Some more suggestions:

  1. Use learned feature descriptors like FCGF instead of FPFH.
  2. If you can't get it to run fast enough, use TEASER++ only for loop closure detection, or run it every k frames, where k > 1 (see the sketch after this list).
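
To illustrate the second point, here is a rough sketch of gating the registration to every k-th frame. The parameter values and the frame stride are just placeholders you would tune, and `src`/`dst` are assumed to be the two 3xN matrices of already-matched points coming out of your correspondence step:

```cpp
#include <Eigen/Core>
#include <teaser/registration.h>

// Only run the (expensive) robust registration every k-th frame.
constexpr int kFrameStride = 5;  // example value; tune for your time budget

void processFrame(int frame_id,
                  const Eigen::Matrix<double, 3, Eigen::Dynamic>& src,
                  const Eigen::Matrix<double, 3, Eigen::Dynamic>& dst) {
  if (frame_id % kFrameStride != 0) {
    return;  // skip; e.g. fall back to a cheaper odometry estimate here
  }

  teaser::RobustRegistrationSolver::Params params;
  params.noise_bound = 0.05;  // example value; depends on your sensor noise
  params.cbar2 = 1;
  params.estimate_scaling = false;
  params.rotation_max_iterations = 100;
  params.rotation_gnc_factor = 1.4;
  params.rotation_estimation_algorithm =
      teaser::RobustRegistrationSolver::ROTATION_ESTIMATION_ALGORITHM::GNC_TLS;
  params.rotation_cost_threshold = 0.005;

  teaser::RobustRegistrationSolver solver(params);
  solver.solve(src, dst);  // columns of src/dst must be matched correspondences

  auto solution = solver.getSolution();
  // solution.rotation and solution.translation give the frame-to-frame pose.
}
```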
femust commented 4 years ago

The first approach you described here is not correct. Given two consecutive LIDAR scans, you can't get correspondences without using some sort of matching algorithm. You can either use 1) nearest-neighbor directly to match points, or 2) use features + NN search in feature space.

Yes, this is what I meant, but it seems I didn't express myself clearly: first I have a kind of kNN matcher, and then I feed the matches into TEASER.

Some more suggestions:

Use learned feature descriptors like FCGF instead of FPFH.
If you can't get it to run fast enough, try to use TEASER++ for loop closure detection, or run it every k frames, where k > 1.

Ok, thank you for your suggestions!

jingnanshi commented 4 years ago

No problem, @femust let me know if you have more questions.