Open lexavtanke opened 7 months ago
Concatenation has processing_time_ms debug output. It takes roughly 50 ms to concatenate 3 point clouds according to my observations on the Autoware sample_rosbag.
Here is the plot of the /sensing/lidar/concatenate_data_synchronizer/debug/processing_time_ms topic. The Y axis is processing time (ms) and the X axis is the rosbag timeline.
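To reproduce the numbers behind such a plot, the samples from the debug topic can be dumped (e.g. with `ros2 topic echo`) and summarized offline. A minimal sketch, assuming the samples have been exported as `stamp,processing_time_ms` CSV rows (the file format and values below are illustrative, not taken from the actual bag):

```python
# Summarize processing_time_ms samples exported from the debug topic.
# Assumes (stamp_sec, processing_time_ms) pairs dumped to CSV text;
# the format is hypothetical, the values are made up around ~50 ms.
import csv
import io
import statistics

def summarize(csv_text: str) -> dict:
    """Return min/mean/max processing time over all samples."""
    reader = csv.reader(io.StringIO(csv_text))
    times = [float(row[1]) for row in reader if row]
    return {
        "min_ms": min(times),
        "mean_ms": statistics.mean(times),
        "max_ms": max(times),
    }

# Example with made-up samples:
sample = "10.0,48.2\n10.1,51.7\n10.2,49.9\n"
print(summarize(sample))
```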
Now I'm working on integrating published_time into the concatenation node to get clearer data, as the previous node (ring_outlier_filter) already has it.
@tomas-pinto do you have any progress in this task?
Unfortunately, I haven’t made much progress on the issue yet. However, I’m actively working on it and will provide an update by the end of this week.
Using the sample-rosbag with the logging simulator, I recorded the left, right and top lidars' before_sync topics, as well as the concatenated/pointcloud topic. I also logged the concatenated topics in the concat_filter by writing the topic names and timestamps into a .txt file, and then matched the concatenated topics with the corresponding left, right and top lidar topics. This is the result I obtained:
(In the visualization, note that same-colored topics are concatenated, except for dark blue topics, which are discarded. Also topics that share the same timestamp with the concatenated/pointcloud have a small yellow hat.)
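The matching step described above can be sketched roughly as a nearest-timestamp lookup: pair each concatenation event (topic name plus stamp parsed from the .txt log) with the closest recorded before_sync stamp for that topic. Function names, the tolerance value, and the toy data are my own illustration, not the actual logging code:

```python
# Toy model of matching logged concatenation events to recorded lidar stamps.
from bisect import bisect_left

def nearest(stamps, t):
    """Return the stamp in the sorted list `stamps` closest to time t."""
    i = bisect_left(stamps, t)
    candidates = stamps[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s - t))

def match_events(concat_events, lidar_stamps, tol=0.05):
    """For each (topic, stamp) event, find the closest recorded stamp for
    that topic; events farther than `tol` seconds from any stamp stay
    unmatched (None)."""
    matches = []
    for topic, t in concat_events:
        s = nearest(sorted(lidar_stamps[topic]), t)
        matches.append((topic, t, s if abs(s - t) <= tol else None))
    return matches

# Illustrative data only:
events = [("left", 1.001), ("right", 1.002), ("top", 2.500)]
stamps = {"left": [1.0, 1.1], "right": [1.0, 1.1], "top": [1.0, 1.1]}
print(match_events(events, stamps))
```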
At the 4-second and 11.5-second marks, I noticed that no concatenated/pointcloud topics were published. However, when I check the logger, the left, right and top lidar topics are concatenated, suggesting there might be a problem with how the concatenated topics were recorded into this bag file, although I am not sure why.
Additionally, between the 8.5 and 9-second marks, there is a problem with the algorithm. When the second right lidar topic arrives, the concat_filter should publish the concatenation of the right and left lidar topics, then clear the buffer and save the new right topic into the buffer, as described in the algorithm's documentation. Instead, it discards the previous right lidar topic, takes the new right lidar topic into the buffer, waits for the top lidar topic, and then publishes the concatenated pointcloud. This problem is also mentioned in this issue.
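The documented buffer behavior described above can be modeled in a few lines: when a cloud arrives for a topic already present in the buffer, the node should publish the current buffer, clear it, and only then store the new cloud. This is a toy model of that rule, not the actual concatenate_data implementation:

```python
# Toy model of the documented "publish on duplicate topic" buffer rule.
def run(arrivals):
    """arrivals: list of (topic, stamp) in arrival order. Returns the list
    of published concatenations, each a dict of topic -> stamp."""
    buffer = {}
    published = []
    for topic, stamp in arrivals:
        if topic in buffer:
            # Second cloud from the same topic: flush the buffer first.
            published.append(dict(buffer))
            buffer.clear()
        buffer[topic] = stamp
    if buffer:
        published.append(dict(buffer))
    return published

# The 8.5-9 s scenario: left and right arrive, then a second right cloud.
# Per the docs, {left, right} should be published before buffering the new
# right cloud (the observed behavior instead discards the old right cloud).
print(run([("left", 8.5), ("right", 8.6), ("right", 8.7), ("top", 8.8)]))
```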
cc: @vividf (I think you've been working on similar stuff recently)
@amadeuszsz Right, several links from this page link to my previous issue. Once I create a PR, I will mention it here so people from LeoDrive can test it again.
@xmfcx The PR for a new design of the concatenate node is done (https://github.com/autowarefoundation/autoware.universe/pull/8300) Could you assign anyone to test for the previous issue? Thanks
@vividf Thank you for the work you put into fixing this. I think most people who worked on this issue are not affiliated with the Autoware project anymore. Would it be possible for you to test it?
If we could get a before and after with the following graph:
Checklist
Description
There are some latency issues with point cloud concatenation in the concatenate_data filter of the pointcloud_preprocessor. We need to investigate and measure them. We can use three types of time information for such purpose:
We need to replace the rosbag stamp with the published_time stamp.
https://github.com/AIT-Assistive-Autonomous-Systems/ros2bag_tools could be used for this. By default, its restamp command changes the bag timestamp of every message with a header to that header's stamp. But since all modified point clouds share the same origin header, we need to somehow use the published time stamps for message types that carry PublishedTime instead. Then it should be perfectly visible in the rqt_bag timeline. PublishedTime.msg
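The restamping idea above can be sketched as follows: rewrite each message's bag timestamp to its published time when a PublishedTime entry with a matching header stamp exists, otherwise fall back to the header stamp (the default restamp behavior). Plain dicts stand in for real rosbag/ROS messages here; the field names are my own:

```python
# Toy model of restamping bag messages with published_time where available.
def restamp(messages, published_times):
    """messages: list of dicts with 'header_stamp' and 'bag_stamp' (seconds).
    published_times: dict mapping header stamp -> published time.
    Messages without a PublishedTime entry get their header stamp instead."""
    out = []
    for msg in messages:
        new_stamp = published_times.get(msg["header_stamp"], msg["header_stamp"])
        out.append({**msg, "bag_stamp": new_stamp})
    return out

msgs = [
    {"header_stamp": 1.00, "bag_stamp": 1.20},  # has a PublishedTime entry
    {"header_stamp": 1.10, "bag_stamp": 1.30},  # no entry -> header stamp
]
print(restamp(msgs, {1.00: 1.05}))
```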
Purpose
Determine the actual timing patterns for the point cloud concatenation.
Possible approaches
Plan:
Definition of done
- Measured timings of concatenate_data.
- A possible approach to speed up point cloud concatenation.