Open drwnz opened 7 months ago
In my experimentation, sending one PandarScanMsg for each single PandarPacket reduced the latency between the final packet of a scan arriving in the HW interface and the decoder wrapper publishing the pointcloud from ≈9.0 ms to ≈1.4 ms for AT128, with around 70k output points per pointcloud.
At least for Hesai, the decoder and decoder wrapper are already agnostic to the number of packets per scan message.
I recommend getting rid of ScanMsg altogether. Maybe it can be published additionally, just for specific logging purposes. But it shouldn't be a part of the runtime processing pipeline.
I agree, this would additionally allow us to send smaller packets without padding (currently, PandarPacket etc. are always MTU_SIZE in length, even if the packet itself is smaller).
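To illustrate the padding point: if the payload is copied using the byte count actually returned by the socket receive call, rather than a fixed MTU-sized buffer, the per-packet message becomes variable-length with no trailing padding. This is only a sketch; the function name is hypothetical and not part of Nebula's API:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical helper: build a packet payload from a receive buffer,
// copying only the bytes that actually arrived (as reported by e.g.
// recvfrom()) instead of the full MTU-sized buffer.
std::vector<uint8_t> make_packet_payload(const uint8_t * buffer, size_t received_bytes)
{
  return std::vector<uint8_t>(buffer, buffer + received_bytes);
}
```

A scan of 630 packets at, say, 893 bytes each would then carry ~563 KB instead of 630 × MTU_SIZE bytes.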
Using the scan message for logging would require us to implement a mechanism to replay the scan contents in a timing-accurate manner, i.e. we would additionally need to store packet timestamps (the ones in the packets themselves could be used but are vendor-specific and thus a pain to parse in the HW interface).
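A timing-accurate replay mechanism as described above could look roughly like the following sketch. It assumes each packet is recorded together with a receive timestamp taken by the HW interface (avoiding the vendor-specific in-packet timestamps); the struct and function names are illustrative, not Nebula's actual types:

```cpp
#include <chrono>
#include <cstdint>
#include <thread>
#include <utility>
#include <vector>

// Hypothetical recorded packet: payload plus the receive time noted by the
// HW interface, expressed relative to the start of the recording.
struct RecordedPacket
{
  std::chrono::nanoseconds stamp;
  std::vector<uint8_t> data;
};

// Replay packets with their original inter-packet timing by sleeping until
// each packet's recorded offset from the first packet has elapsed.
template <typename Callback>
void replay(const std::vector<RecordedPacket> & packets, Callback && emit)
{
  if (packets.empty()) {
    return;
  }
  const auto start = std::chrono::steady_clock::now();
  const auto t0 = packets.front().stamp;
  for (const auto & pkt : packets) {
    std::this_thread::sleep_until(start + (pkt.stamp - t0));
    emit(pkt);
  }
}
```

This is exactly the extra machinery (timestamp storage plus a replay loop) that keeping ScanMsg purely for logging would require.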
(also see discussion autowarefoundation#4024)
I'm currently working on this along with refactoring Nebula towards being a single node:
Description
Currently, the scan message containing raw UDP packets is sized to match the number of packets in the scan. Together with adding nebula_messages, which provides a generic udppacket message and a packets message (currently the "scan" message), enabling an arbitrary number of packets per packets message would allow tuning to optimize throughput.
Purpose
Details
The reasoning behind this change is as follows:
Possible approaches
nebula_messages with nebula_packet and nebula_packets, which allow any number of packets (but keep track of the number of packets contained in the message with a field)
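A hypothetical layout for such messages in ROS 2 msg syntax (field names are illustrative, not the actual Nebula definitions) might be:

```
# nebula_packet.msg (sketch)
builtin_interfaces/Time stamp   # receive time recorded by the HW interface
uint8[] data                    # variable length, no MTU padding

# nebula_packets.msg (sketch)
std_msgs/Header header
uint16 packet_count             # number of packets contained in this message
nebula_packet[] packets
```

With a variable-length packets array, the same message type covers both the one-packet-per-message case (low latency) and larger batches (lower publishing overhead), tunable per sensor.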