luxonis / depthai-ros

Official ROS Driver for DepthAI Sensors.
MIT License

Reduce latency ROS #541

Open abegni opened 3 months ago

abegni commented 3 months ago

Hi, I'm quite new to OAK cameras. I'm using an OAK-D Pro Wide PoE. I need some info on how to set and manage the camera and stereo parameters directly via ROS1 params. I read the guide https://docs.luxonis.com/software/ros/depthai-ros/driver/. At the end of it there is the list of parameters, and I didn't understand how to manage them in an easy way. I would like to set them to reduce latency, even at the expense of stereo image quality. I only need the RGB, pointcloud and depth image outputs, so I'm using rgbd_pcl.launch. With the default parameters the latency is around 1 second. Any suggestion will be really appreciated. Thank you

Serafadam commented 3 months ago

Hi, for latency improvements you can also refer to this doc page and to the ROS parameters/logic of the normal DAI nodes; usually setting low_bandwidth_mode and reducing the image size helps.
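A hedged sketch of applying those two suggestions from a YAML file instead of editing launch files. The parameter names below are assumptions based on the driver's parameter list and may differ between releases (verify with `rosparam list` after launching), and the sketch assumes the launch file exposes a `params_file` argument as recent driver versions do:

```shell
# Sketch: collect low-latency overrides in a YAML file.
# Parameter names are assumptions from the depthai_ros_driver docs;
# confirm them on your version with `rosparam list`.
cat > /tmp/oak_low_latency.yaml <<'EOF'
camera_i_low_bandwidth: true    # encode streams on-device to cut link bandwidth
rgb_i_width: 640                # smaller frames -> less PoE traffic
rgb_i_height: 400
EOF
echo "wrote /tmp/oak_low_latency.yaml"
# Then launch with the overrides (run on the robot):
#   roslaunch depthai_ros_driver rgbd_pcl.launch params_file:=/tmp/oak_low_latency.yaml
```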

abegni commented 3 months ago

As you suggested, I tried low_bandwidth_mode. The latency definitely decreased, but so did the depth image quality. It's not clear to me how exactly low_bandwidth_mode works, i.e. which parameters it changes.

abegni commented 3 months ago

... It is still hard to understand which package (depthai_ros_driver or depthai_examples) is the best one for starting the cameras with both RGB and depth output images.

My setup:

With reference to the setup outlined above, I tried camera.launch (from the depthai_ros_driver package) with the following parameters (because otherwise the bandwidth gets saturated):

However, the output is still far from acceptable (low quality, high latency). Additionally, I tried the depthai_examples package, more precisely the stereo_nodelet.launch sample launch file. It seems to me that stereo_nodelet.launch performs well, but it doesn't provide the RGB stream, which I would also like to have.

Moreover, from what I observed, the C++ code and the launch files in the depthai_ros_driver and depthai_examples packages are completely different from each other. At this point it is unclear what the best way is to set up the cameras using ROS while achieving acceptable performance.

Thank you for any help

Serafadam commented 3 months ago

Hi, could you check your network settings? With these parameters the latency should be negligible, so I suspect the issue might be on the network side. I think it would be best to start by checking whether your setup supports jumbo frames.

abegni commented 3 months ago

This is my current setup:

In Ubuntu netplan:

    network:
      renderer: NetworkManager
      version: 2
      ethernets:
        enp7s0:
          dhcp4: no
          dhcp6: no
          addresses:
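One thing worth double-checking in the netplan config above: the host interface itself also needs a jumbo MTU, otherwise the camera-side setting alone won't help. A sketch of the addition (the `mtu` key is the assumption here; the interface name is taken from the setup above, and the static addressing stays whatever you already use):

```yaml
network:
  renderer: NetworkManager
  version: 2
  ethernets:
    enp7s0:
      mtu: 9000     # assumption: host-side jumbo frames, matching the camera's 9000
      dhcp4: no
      dhcp6: no
      addresses:
        # (your static addressing, as above)
```

After editing, apply it with `sudo netplan apply` and confirm with `ip link show enp7s0`, which should report `mtu 9000`.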

In camera.cpp I modified the code in "void Camera::getDeviceType()" method as follows:

    /*Activate jumbo frame for camera device*/
    dai::BoardConfig board;
    board.network.mtu = 9000;
    board.network.xlinkTcpNoDelay = false;
    board.sysctl.push_back("net.inet.tcp.delayed_ack=1");
    /****************************************/

    pipeline = std::make_shared<dai::Pipeline>();

    /****************************************/
    pipeline->setBoardConfig(board);
    /****************************************/

Is there any way to know whether jumbo frames are actually being used with this configuration?
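One common way to check this end-to-end (a sketch; `<camera-ip>` and `enp7s0` are placeholders for your setup) is a non-fragmenting ping sized to the jumbo MTU. The ICMP payload is the MTU minus 28 bytes of headers, so for MTU 9000 the payload is 8972 bytes:

```shell
# Sketch: verify jumbo frames with a non-fragmenting ping.
MTU=9000
PAYLOAD=$((MTU - 28))   # 28 bytes = 20 (IPv4 header) + 8 (ICMP header)
echo "payload size: ${PAYLOAD}"
# Run manually against the camera; it only succeeds if every hop,
# including the PoE switch, forwards 9000-byte frames unfragmented:
#   ping -M do -s ${PAYLOAD} <camera-ip>
# Also confirm the host NIC is actually configured for jumbo frames:
#   ip link show enp7s0 | grep mtu
```

If the ping fails with a "message too long" error while a default-size ping works, some hop on the path (often the switch) is not passing jumbo frames.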

Serafadam commented 3 months ago

Hi, here you can find some additional information on testing jumbo frames. Additionally, in case you haven't seen it yet, here is some documentation on latency optimization for PoE cameras.