ros-drivers / velodyne

ROS support for Velodyne 3D LIDARs
http://ros.org/wiki/velodyne

Use sensor_qos profile #475

Closed alluring-mushroom closed 1 year ago

alluring-mushroom commented 2 years ago

Is your feature request related to a problem? Please describe.
We have a system with many VLP16s (eight of them) and other sources of high-bandwidth data, so we are locking down our QoS policies. I have noticed that the velodyne driver and pointcloud converter use essentially the default QoS policy. Specifically, they use a reliable policy instead of a best-effort one.

https://github.com/ros-drivers/velodyne/blob/37a2b71d7f2111616931339c850fb5bb23b96313/velodyne_driver/src/driver/driver.cpp#L174
https://github.com/ros-drivers/velodyne/blob/37a2b71d7f2111616931339c850fb5bb23b96313/velodyne_pointcloud/src/conversions/convert.cpp#L124
https://github.com/ros-drivers/velodyne/blob/37a2b71d7f2111616931339c850fb5bb23b96313/velodyne_pointcloud/src/conversions/convert.cpp#L129

Because we have so many lidars producing so much data, using a best-effort policy makes sense. Additionally, lidars are sensors, and I believe that, in general, receiving every single velodyne_packets message is not a common need.

Describe the solution you'd like
I think this should be changed to a best-effort policy, most likely the standard rmw_qos_profile_sensor_data, as this profile was designed for sensor data. E.g. the velodyne_packets publisher would look like this:

#include "rclcpp/qos.hpp"

// rclcpp::SensorDataQoS() wraps rmw_qos_profile_sensor_data
// (best-effort reliability, keep-last history).
output_ = this->create_publisher<velodyne_msgs::msg::VelodyneScan>(
  "velodyne_packets", rclcpp::SensorDataQoS());
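(Note: the rclcpp publisher API takes an rclcpp::QoS object rather than the raw rmw_qos_profile_sensor_data struct, so rclcpp::SensorDataQoS() is the idiomatic way to request that profile. As far as I know, the sensor-data profile uses best-effort reliability with a keep-last history of depth 5, trading delivery guarantees for lower latency and overhead, which matches the streaming nature of lidar packets.)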

Describe alternatives you've considered
I am very keen to hear the reasons for keeping the default QoS, as such reasoning will help inform decisions about our own architecture.

JWhitleyWork commented 1 year ago

@alluring-mushroom Would you be willing to make a Pull Request to the ros2 branch?

alluring-mushroom commented 1 year ago

@JWhitleyWork, I potentially can in the future. I opened a similar issue on the SICK repo, and they advised that the impact of this is minimal, since subscribing with best effort results in best-effort delivery:

you can always change the QoS on the receiving side to best_effort, which will lead to best-effort data handling, as the chosen policy is negotiated between publisher and subscriber.

I did not realise this when I posted the issue. I still believe that being able to configure the node itself as best-effort has merit, to keep the data rate as high as possible, though I do not currently have the bandwidth to work on it (which is probably fairly apparent). For reference, the subscriber-side approach is sketched below.
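For anyone hitting the same bandwidth concern, here is a minimal sketch of the subscriber-side workaround the SICK maintainers describe. The node name and callback are hypothetical and only illustrative; rclcpp::SensorDataQoS() and the velodyne_packets topic are real.

#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "velodyne_msgs/msg/velodyne_scan.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);

  // Hypothetical listener node; the name is illustrative only.
  auto node = rclcpp::Node::make_shared("velodyne_listener");

  // Subscribing with rclcpp::SensorDataQoS() (best-effort, keep-last)
  // negotiates best-effort delivery with the driver's reliable publisher,
  // so dropped packets are tolerated instead of retransmitted.
  auto sub = node->create_subscription<velodyne_msgs::msg::VelodyneScan>(
    "velodyne_packets", rclcpp::SensorDataQoS(),
    [](const velodyne_msgs::msg::VelodyneScan::SharedPtr scan) {
      RCLCPP_INFO(rclcpp::get_logger("velodyne_listener"),
                  "Received scan with %zu packets", scan->packets.size());
    });

  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}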

JWhitleyWork commented 1 year ago

Thanks for responding. As you can probably tell from my very spaced-out replies, I don't currently have the bandwidth to address this either. Therefore, I'm going to close this issue until a PR is submitted. Please re-open it or create a new one if there are other issues.