Closed: wohe closed this pull request 4 years ago
Multiple reasons:
The introduction of [Timed]RangefinderPoint was really important to my use case: I needed to pass a couple of additional vertex attributes, such as intensity and laser ID, and have them survive the entire pipeline, so that the scan-matched point cloud we get out still carries those attributes. Sure, "pay for what you use" should apply; I am wondering whether we can reconcile the two.
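For illustration, here is a minimal sketch of what such an attribute-carrying point type could look like, modeled on cartographer's RangefinderPoint; the extra fields and their names (intensity, laser_id) are assumptions for this example, not the library's actual type:

```cpp
#include <cstdint>

#include "Eigen/Core"

// Hypothetical point type modeled on cartographer's RangefinderPoint: the
// extra vertex attributes travel with the position through the pipeline and
// are still present in the scan-matched point cloud.
struct CustomRangefinderPoint {
  Eigen::Vector3f position;
  float intensity = 0.f;      // Assumed attribute: sensor-reported reflectance.
  std::uint8_t laser_id = 0;  // Assumed attribute: beam/ring of origin.
};
```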
Performance-wise side note, off-topic: I have been battling very slow performance on Windows for a long time because the code base is extremely allocation-heavy: make_unique/make_shared calls everywhere, STL containers using std::allocator, std::function-based queues, lots of interim results (like voxel-filtered clouds) created as local variables and passed by value, and trajectory data that consists of lots of small allocations (this made the quit operation take up to half a minute for a trajectory of tens of thousands of nodes). On Linux, my feeling is that glibc is extremely forgiving about this: it hands out virtual memory very freely, essentially overcommitting, and leaves it to the OS to page the used memory in and out. On Windows there is no overcommitting, so if you hammer the system allocator, especially from multiple subsystems/threads, you can get punished badly (e.g. performance falls from 2x to 0.2x realtime) while the allocator spends a lot of time coalescing the heap and doing other housekeeping. IIRC, the performance cost of adding a couple of point attributes to the input cloud -> scan-matched cloud data path was negligible, but I would be interested to see some tests.
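To make the allocation concern concrete, here is a small standalone sketch (not cartographer code) of the by-value-plus-move pattern that filters a cloud in place instead of producing a freshly allocated interim result; FilterByMinRange is a made-up stand-in for a real stage such as the voxel filter:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

#include "Eigen/Core"

using PointCloud = std::vector<Eigen::Vector3f>;

// Taking the cloud by value lets a caller std::move() it in, so filtering
// happens in the existing buffer instead of allocating a fresh interim cloud.
PointCloud FilterByMinRange(PointCloud cloud, float min_range) {
  cloud.erase(std::remove_if(cloud.begin(), cloud.end(),
                             [min_range](const Eigen::Vector3f& point) {
                               return point.norm() < min_range;
                             }),
              cloud.end());
  return cloud;  // Returned by move; no deep copy.
}

int main() {
  PointCloud cloud = {Eigen::Vector3f(0.1f, 0.f, 0.f),
                      Eigen::Vector3f(5.f, 0.f, 0.f)};
  cloud = FilterByMinRange(std::move(cloud), 1.f);  // Buffer is reused.
  return cloud.size() == 1 ? 0 : 1;
}
```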
Adds a new field intensities to TimedPointCloudData. RangeDataCollator now also takes intensities into account. AddRangeData now takes the point cloud by value instead of by const reference, since we would later make a copy of it anyway. A condensed sketch of the resulting shape follows below.
Signed-off-by: Wolfgang Hess whess@lyft.com
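For readers skimming the diff, a condensed sketch of the shape of the change; the struct follows the layout of cartographer's sensor/timed_point_cloud_data.h, but the types here are simplified (e.g. double stands in for common::Time) and the AddRangeData body is illustrative only:

```cpp
#include <cassert>
#include <string>
#include <vector>

#include "Eigen/Core"

struct TimedRangefinderPoint {
  Eigen::Vector3f position;
  float time;  // Relative measurement time within the scan.
};

struct TimedPointCloudData {
  double time;  // Simplified stand-in for cartographer's common::Time.
  Eigen::Vector3f origin;
  std::vector<TimedRangefinderPoint> ranges;
  std::vector<float> intensities;  // New field: one value per point in
                                   // ranges, or empty if the sensor has none.
};

// By-value parameter: callers can std::move() the cloud in, which saves the
// copy the old const-reference signature made internally anyway.
void AddRangeData(const std::string& sensor_id,
                  TimedPointCloudData timed_point_cloud_data) {
  assert(timed_point_cloud_data.intensities.empty() ||
         timed_point_cloud_data.intensities.size() ==
             timed_point_cloud_data.ranges.size());
  // ... hand the data off downstream, moving timed_point_cloud_data along.
  (void)sensor_id;
}
```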