tier4 / nebula

A universal LiDAR and radar driver for ROS 2, supporting Hesai, Velodyne, Robosense and Continental sensors.
https://tier4.github.io/nebula/
Apache License 2.0

Lidar-agnostic scan message type #114

Closed peci1 closed 5 months ago

peci1 commented 9 months ago

Hi, I see you're building a universal lidar driver. As such, I think it would benefit from a unified intermediate representation, similar to what LaserScan is for 2D lidars. Such a representation carries more information than a point cloud (e.g. missed rays, ray directions, ...) while being more compact, and there is a straightforward way to convert it to PointCloud2.
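
To make the conversion concrete, here is a minimal sketch assuming a hypothetical LaserScan-like layout (a dense range image plus per-column azimuth and per-row elevation angles). The names and layout are illustrative only, not a proposed message definition:

```python
# Sketch only: converting a hypothetical range-image representation to XYZ points.
import numpy as np

def ranges_to_points(ranges, azimuths, elevations):
    """ranges: (H, W) in metres, with inf/NaN marking missed rays;
    azimuths: (W,) rad; elevations: (H,) rad. Returns (N, 3) XYZ."""
    az = azimuths[np.newaxis, :]      # (1, W), broadcast over rows
    el = elevations[:, np.newaxis]    # (H, 1), broadcast over columns
    x = ranges * np.cos(el) * np.cos(az)
    y = ranges * np.cos(el) * np.sin(az)
    z = ranges * np.sin(el)
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Dropping the missed rays here is exactly the information a plain
    # PointCloud2 loses; the intermediate representation keeps it.
    return pts[np.isfinite(pts).all(axis=1)]
```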

A few years ago I tried to kick it off, but it didn't work out. Would you like to help revive it?

The work I've done is described here: https://github.com/ros/common_msgs/issues/150

One of the reasons why it hasn't been considered for merging into common_msgs is that there was no implementation in the wild using the proposed messages. So it would even help if you just created some similar messages in your repo...

knzo25 commented 9 months ago

@peci1

Hi, thanks for bringing this to us. The idea looks interesting, and it would also help us get traction with this project. From my side, I am worried about using intermediate representations, since we currently prioritize minimizing latency above all, and (our) downstream tasks all use PointCloud2.

The bulk of the nebula architecture discussion happens in our working group meetings, so if you are interested, it would be nice if you could bring this topic to the next meeting. For example, this is the link to the last meeting: https://github.com/orgs/autowarefoundation/discussions/4116

peci1 commented 9 months ago

Thanks for your answer. If latency is your top priority, then an intermediate representation could indeed work against your goals, and there's not much to be done about that.

I see at least 2 potential advantages it could have even for Autoware:

  1. By representing even the missed rays, it can be used for better clearing of occupancy maps. There might be nonstandard ways to represent this even in PointCloud2, though (like publishing a point with NaN XYZ and adding viewpoint, direction and time offset fields; see the sketch after this list).
  2. Datasets and logs with less preprocessed data. PointCloud2 is usually the result of a fair amount of processing logic with its own parameters and choices. The intermediate representation could remove this additional processing and allow storing almost raw data, while still being a standardized, vendor-agnostic format.
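
Regarding the workaround mentioned in point 1, here is a minimal sketch, assuming ROS 2 with sensor_msgs_py; the extra field names (azimuth, elevation, time_offset) are illustrative, not an established convention:

```python
# Sketch only: encoding missed rays in PointCloud2 via NaN XYZ plus extra fields.
import math
from sensor_msgs.msg import PointField
from sensor_msgs_py import point_cloud2
from std_msgs.msg import Header

# Illustrative field names; a real convention would need to be agreed upon.
FIELDS = [
    PointField(name=n, offset=4 * i, datatype=PointField.FLOAT32, count=1)
    for i, n in enumerate(['x', 'y', 'z', 'azimuth', 'elevation', 'time_offset'])
]

def make_cloud(header: Header, returns):
    """returns: iterable of (range_m, azimuth, elevation, time_offset),
    with range_m = None for a missed ray."""
    points = []
    for r, az, el, dt in returns:
        if r is None:
            # Missed ray: NaN position, but the direction and timing survive,
            # so an occupancy map can still clear along the ray.
            points.append((math.nan, math.nan, math.nan, az, el, dt))
        else:
            points.append((r * math.cos(el) * math.cos(az),
                           r * math.cos(el) * math.sin(az),
                           r * math.sin(el), az, el, dt))
    return point_cloud2.create_cloud(header, FIELDS, points)
```

The viewpoint would still have to be carried elsewhere (e.g. via the header frame and TF), and every consumer would have to know about these extra fields, which is exactly why a standardized message would be nicer.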

I still wonder why 2D lidars settled on LaserScan messages so easily, while 3D lidars have never even tried having something similar. I think the 2D case clearly shows that sometimes it is worth sacrificing a bit of performance to gain a standardized approach. I've never worked with the raw binary data stream of a 2D lidar; all of them directly offer LaserScan messages, and nobody even thinks of providing a ROS driver that would directly output PointCloud2 or some other format.

I don't think I'll attend the meeting, as I'm quite busy these days. However, I'd be glad if you could say a word or two about this at the meeting, leaving it to the individual attendees whether they're interested in continuing this discussion. If there are people really interested in moving in this direction, I'd be glad to connect with them.