cartographer-project / cartographer

Cartographer is a system that provides real-time simultaneous localization and mapping (SLAM) in 2D and 3D across multiple platforms and sensor configurations.
Apache License 2.0
7.15k stars 2.25k forks

Add intensity data to TimedPointCloudData. #1742

Closed wohe closed 4 years ago

wohe commented 4 years ago

Adds a new field, intensities, to TimedPointCloudData. RangeDataCollator now also takes intensities into account. AddRangeData now takes the point cloud by value instead of by const reference, since we would later make a copy of it anyway.

Signed-off-by: Wolfgang Hess whess@lyft.com
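The shape of the change can be sketched with simplified stand-ins for the types named above (these structs are illustrative, not cartographer's actual definitions): intensities are stored as a vector parallel to the points, and AddRangeData accepts the cloud by value so callers can move it in.

```cpp
#include <utility>
#include <vector>

// Hypothetical, simplified stand-ins for cartographer's sensor types.
struct TimedRangefinderPoint {
  float x, y, z;
  float time;  // measurement time relative to the scan's acquisition time
};

struct TimedPointCloudData {
  double time;                                // acquisition time of the scan
  std::vector<TimedRangefinderPoint> ranges;  // the points
  std::vector<float> intensities;             // new field: empty when unused
};

// Taking the cloud by value (instead of by const reference) lets callers
// std::move() it in, since a copy would otherwise be made later anyway.
TimedPointCloudData AddRangeData(TimedPointCloudData data) {
  // ... collation / processing would happen here ...
  return data;
}
```

A caller that no longer needs its cloud can hand it over without a copy: `AddRangeData(std::move(cloud))`.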

wohe commented 4 years ago

Multiple reasons:

  1. This is intended to extend the existing public interface.
  2. It needs to remain efficient when intensities are not used; your suggestion would add a field to every point.
  3. This is code that has seen real use. We should probably postpone major changes to later PRs, as they would need careful consideration (measurements!) of the performance impact.
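The cost argument in point 2 can be made concrete with a generic illustration (none of this is cartographer code): a per-point intensity field grows every point, while a parallel vector on the cloud costs nothing per point when it stays empty.

```cpp
#include <cstddef>
#include <vector>

// Two layouts for the same data.
struct PointPlain { float x, y, z, time; };
struct PointPerPointIntensity { float x, y, z, time, intensity; };

struct CloudWithParallelIntensities {
  std::vector<PointPlain> points;
  std::vector<float> intensities;  // left empty when the sensor has none
};

// Bytes of point storage for n points under each layout
// (assuming capacity == size).
std::size_t BytesPerPointField(std::size_t n) {
  return n * sizeof(PointPerPointIntensity);
}
std::size_t BytesParallel(std::size_t n, bool has_intensities) {
  return n * sizeof(PointPlain) + (has_intensities ? n * sizeof(float) : 0);
}
```

With intensities present the two layouts cost about the same; without them, the parallel-vector layout pays nothing extra per point.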
ojura commented 4 years ago

The introduction of [Timed]RangefinderPoint was really important to my use case, because I needed to pass a couple of other vertex attributes, like intensity and laser ID, and have them survive the entire pipeline, so that the scan-matched point cloud comes out with those attributes preserved. Sure, pay-for-what-you-use should apply; I am wondering whether we can reconcile the two.
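The use case described here can be sketched as follows (the names are hypothetical, not cartographer API): a caller-defined point type carries extra vertex attributes through a pipeline templated on the point type, so they survive untouched to the output.

```cpp
#include <vector>

// Hypothetical point type with extra attributes beyond position.
struct AttributedPoint {
  float x, y, z;
  float intensity;  // extra attribute
  int laser_id;     // extra attribute
};

// A pipeline stage templated on the point type transforms positions but
// leaves all other attributes of each point untouched.
template <typename PointT>
std::vector<PointT> Translate(std::vector<PointT> cloud, float dx) {
  for (PointT& p : cloud) p.x += dx;
  return cloud;
}
```

Because the stage never names the extra fields, any point type with an `x` member passes through with its attributes preserved.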

A performance side note, somewhat off-topic: I have long been battling very slow performance on Windows because the code base is extremely allocation-heavy: make_unique/make_shared calls everywhere, STL containers using std::allocator, std::function-based queues, lots of interim results (like voxel-filtered clouds) created as local variables and passed by value, and constant trajectory data consisting of many small allocations (this caused the quit operation on a trajectory with tens of thousands of nodes to take up to half a minute).

On Linux, my feeling is that glibc is extremely forgiving about this: I think it gives out virtual memory very freely, essentially overcommitting, and then leaves it to the OS to page the used memory in and out. On Windows there is no overcommitting, so if you hammer the system allocator, especially from multiple subsystems/threads, you can get punished badly (e.g. performance falls from 2x to 0.2x realtime) while the allocator spends a lot of time coalescing the heap and doing other housework.

IIRC, the performance cost of adding a couple of point attributes to the input cloud -> scan-matched cloud data path was negligible. I would be interested to see some tests.
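The allocation pattern being described can be illustrated generically (this is not cartographer code): storing N nodes behind individual unique_ptrs makes N allocator calls up front and N individual frees on shutdown, which is exactly what a strict allocator punishes, while a contiguous vector of values makes one allocation and one free.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

struct Node { double x, y, theta; };

// Allocation-heavy: one heap allocation per node; the destructor later
// performs n individual frees (the "slow quit" pattern described above).
std::size_t BuildPerNodeHeap(std::size_t n) {
  std::vector<std::unique_ptr<Node>> nodes;
  nodes.reserve(n);
  for (std::size_t i = 0; i < n; ++i)
    nodes.push_back(std::make_unique<Node>());  // one allocation per node
  return nodes.size();
}

// Allocation-light: a single contiguous block for all n nodes,
// released in one deallocation.
std::size_t BuildContiguous(std::size_t n) {
  std::vector<Node> nodes(n);
  return nodes.size();
}
```

Both produce the same logical data; only the number of allocator round trips differs, which is where a non-overcommitting allocator makes the difference felt.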