ouster-lidar / ouster-ros

Official ROS drivers for Ouster sensors (OS0, OS1, OS2, OSDome)
https://ouster.com

Reduce point cloud size to correspond with azimuth window #145

Open outrider-jkoch opened 1 year ago

outrider-jkoch commented 1 year ago

Is your feature request related to a problem? Please describe. When the azimuth window is reduced from the default [0, 360000], the lidar packet messages consume less data. However, a full 360-degree point cloud is still created, which potentially consumes a large amount of network and/or storage resources with unpopulated data.

Describe the solution you'd like Reduce the horizontal resolution in the published point cloud to align with the azimuth window specified in the sensor configuration.

Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered.

Targeted Platform (please complete the following information only if applicable, otherwise put N/A):
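For context, the azimuth window is the azimuth_window field of the sensor configuration and is expressed in millidegrees, [0, 360000] being the full circle. A minimal sketch of selecting a 90-degree window through the ouster_client C++ API; the hostname is a placeholder:

    #include <utility>

    #include "ouster/client.h"
    #include "ouster/types.h"

    int main() {
        ouster::sensor::sensor_config config;
        // azimuth_window is a [start, end] pair in millidegrees
        config.azimuth_window = std::make_pair(0, 90000);  // 0 to 90 degrees
        // "os-99xxxxxxxxxx.local" is a placeholder for your sensor hostname
        if (!ouster::sensor::set_config("os-99xxxxxxxxxx.local", config))
            return 1;  // the sensor rejected or failed to apply the config
        return 0;
    }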

aabsz commented 1 year ago

Hi, I am dealing with this problem too. Do you have any solutions for not collecting the complete 360-degree point cloud data when limiting the azimuth window?

Samahu commented 1 year ago

@aabsz just to understand: is your problem limiting the amount of data sent by the sensor when a narrow azimuth window is selected, or limiting the number of points in the published point cloud?

aabsz commented 1 year ago

> @aabsz just to understand: is your problem limiting the amount of data sent by the sensor when a narrow azimuth window is selected, or limiting the number of points in the published point cloud?

Thank you @Samahu for the reply. I would like to know how to limit the amount of collected data in both cases, but right now the first case is my problem: I limit the azimuth window to 90 degrees, for example, but the sensor still collects the same amount of data as when the azimuth window is 360 degrees.

Samahu commented 1 year ago

This would require a few changes to implement, but in the meantime, if you really need a workaround, you can hack it as follows:

Go to the line: https://github.com/ouster-lidar/ouster-ros/blob/3f01e1d7001d8d21ac984566d17505b98905fa86/src/os_ros.cpp#L222

Perform the following changes:

    int window_start = 512;  // match to the actual start of your azimuth window
    int window_width = 256;  // for the 1024x10 mode, 256 represents 90 deg
    assert(window_start + window_width <= ls.w); // ensure you don't go out of bounds
    cloud.resize(ls.h * window_width);
    for (auto u = 0; u < ls.h; u++) {
        for (auto v = window_start; v < window_start + window_width; v++) {
            const auto col_ts = timestamp[v];
            const auto ts = col_ts > scan_ts ? col_ts - scan_ts : 0UL;
            const auto idx = u * ls.w + v;
            const auto xyz = points.row(idx);
            const auto pt_idx = u * window_width + (v - window_start); // note that the store index starts at 0 for the window
            cloud.points[pt_idx] = ouster_ros::Point{
                {static_cast<float>(xyz(0)), static_cast<float>(xyz(1)),
                 static_cast<float>(xyz(2)), 1.0f},
                static_cast<float>(sg[idx]),
                static_cast<uint32_t>(ts),
                static_cast<uint16_t>(rf[idx]),
                static_cast<uint16_t>(u),
                static_cast<uint16_t>(nr[idx]),
                static_cast<uint32_t>(rg[idx]),
            };
        }
    }
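Rather than hardcoding window_start and window_width, they can also be derived from the sensor metadata; a hedged sketch, where window_from_metadata is a hypothetical helper and the window is assumed not to wrap past column 0 (sensor_info::format.column_window holds the first and last valid column indices):

    #include <utility>

    #include "ouster/types.h"

    // Hypothetical helper: derive window_start and window_width from the
    // column window recorded in the sensor metadata. Assumes the azimuth
    // window does not wrap around the 0/360 degree seam (start <= end).
    std::pair<int, int> window_from_metadata(
        const ouster::sensor::sensor_info& info) {
        const int start = info.format.column_window.first;
        const int end = info.format.column_window.second;
        return {start, end - start + 1};  // {window_start, window_width}
    }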
outrider-jkoch commented 1 year ago

Are there plans to build this capability into the driver? Currently, each sensor we have with a different azimuth window would require either a unique hack like the one above or a proper modification of the driver to use the metadata or a parameter to define the window.

Samahu commented 1 year ago

Yes, we do have plans to add support for this capability. I can't give an ETA at the moment since I have other items with higher priority, but it is definitely on our TODO list.

mfloto commented 9 months ago

Hey @Samahu,

Thanks for the workaround. This has greatly reduced the time it takes to call pcl::fromROSMsg, which is critical for our use case. We have applied it to the current version of the scan_to_cloud_f_destaggered function:

void scan_to_cloud_f_destaggered(ouster_ros::Cloud<PointT>& cloud,
                                 PointS& staging_point,
                                 const ouster::PointsF& points,
                                 uint64_t scan_ts, const ouster::LidarScan& ls,
                                 const ouster::sensor::sensor_info& info,
                                 const std::vector<int>& pixel_shift_by_row) {
    auto ls_tuple = make_lidar_scan_tuple<0, N, PROFILE>(ls);
    auto timestamp = ls.timestamp();

    int window_start = 654;  // match to the actual start of your azimuth window
    int window_width = 740;  // for the 2048x10 mode, 740 represents 130 deg
    assert(window_start + window_width <= ls.w); // ensure you don't go out of bounds
    cloud.resize(ls.h * window_width);

    for (auto u = 0; u < ls.h; u++) {
        for (auto v = window_start; v < window_start + window_width; v++) {
            const auto v_shift = (v + ls.w - pixel_shift_by_row[u]) % ls.w;
            auto ts = timestamp[v_shift];
            ts = ts > scan_ts ? ts - scan_ts : 0UL;
            const auto src_idx = u * ls.w + v_shift;
            const auto updated_idx = u * window_width + v - window_start; // adjusted index
            const auto xyz = points.row(src_idx);
            // if the target point and staging point have matching types, bind
            // the target directly and avoid performing transform_point at the end
            auto& pt = CondBinaryBind<std::is_same_v<PointT, PointS>>::run(
                cloud.points[updated_idx], staging_point);
            // all native point types have x, y, z, t and ring values
            pt.x = static_cast<decltype(pt.x)>(xyz(0));
            pt.y = static_cast<decltype(pt.y)>(xyz(1));
            pt.z = static_cast<decltype(pt.z)>(xyz(2));
            // TODO: in the future we could probably skip copying the t and
            // ring values if we know beforehand that the target point cloud
            // does not have a field to hold the timestamp or ring, as in the
            // case of pcl::PointXYZ or pcl::PointXYZI.
            pt.t = static_cast<uint32_t>(ts);
            pt.ring = static_cast<uint16_t>(u);
            copy_lidar_scan_fields_to_point<0>(pt, ls_tuple, src_idx);
            // only perform point transform operation when PointT, and PointS
            // don't match
            CondBinaryOp<!std::is_same_v<PointT, PointS>>::run(
                cloud.points[updated_idx], staging_point,
                [](auto& tgt_pt, const auto& src_pt) {
                    point::transform(tgt_pt, src_pt);
                });
        }
    }
}

We also tried to pass a sensor::sensor_info object to the function to avoid hardcoding our azimuth window, but discovered that this would mean changing the ScanToCloudFn signature, resulting in too many changes throughout the wrapper.

Do you know of an easier way to get the current sensor::sensor_info or is there an ETA for the official implementation?

Edit: The updated_idx had to be adjusted to start at 0 (by subtracting window_start); the code above has been changed accordingly.

Samahu commented 9 months ago

@mfloto thanks for providing the updated example. With regard to the second question, unfortunately I don't see a good way around passing the sensor_info to scan_to_cloud_f_destaggered and updating all the places in the code. But, as you said, this would change the code substantially from the current master, which could make it a bit difficult to merge in any patch updates to the driver in the future, and there is no guarantee that the updated signature would stay the same. For example, if I were to pass sensor_info to this function, I would probably drop the pixel_shift_by_row param since it is included in sensor_info.

An alternative that avoids this viral change is to create a singleton that serves the sensor_info - or sensor_info(s) if you have more than one sensor - or to provide a static method as part of the os_cloud_node and the os_driver_node since they create this structure. Of course this isn't a great solution and could have problems in the case of a runtime node restart, but it would keep the changes minimal, imo.

I don't have an ETA for when I could deliver this feature, but it is definitely one that we'd like to support some time in the future.
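A rough sketch of that singleton idea, assuming a process-wide registry keyed by sensor hostname that os_cloud_node / os_driver_node would populate once the metadata is available (SensorInfoRegistry is hypothetical, not part of the driver):

    #include <mutex>
    #include <string>
    #include <unordered_map>

    #include "ouster/types.h"

    // Hypothetical process-wide registry serving sensor_info objects.
    class SensorInfoRegistry {
       public:
        static SensorInfoRegistry& instance() {
            static SensorInfoRegistry registry;  // created on first use
            return registry;
        }

        // called by whichever node parses the sensor metadata
        void set(const std::string& sensor, ouster::sensor::sensor_info info) {
            std::lock_guard<std::mutex> lock(mutex_);
            infos_[sensor] = std::move(info);
        }

        // called from e.g. scan_to_cloud_f_destaggered
        ouster::sensor::sensor_info get(const std::string& sensor) const {
            std::lock_guard<std::mutex> lock(mutex_);
            return infos_.at(sensor);  // throws if never registered
        }

       private:
        SensorInfoRegistry() = default;
        mutable std::mutex mutex_;
        std::unordered_map<std::string, ouster::sensor::sensor_info> infos_;
    };

As noted, this keeps the diff small but ties correctness to node lifetime; anything that re-reads the metadata after a restart must call set again.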

mfloto commented 9 months ago

Thank you for your quick response. Then we will keep the workaround in place for now.

mfloto commented 6 months ago

Hey @Samahu, we have two follow-up questions:

1. We want to have the azimuth window's dimensions set as the width and height of the published point cloud message, but cannot find where these values are initially set. Through resizing with cloud.resize(ls.h * window_width); the new dimensions are 1 x n.
2. We would like to synchronize a different sensor to the point in time at which the lidar scan is at the beginning of the azimuth window, using an ApproximateTimeSynchronizer. Therefore we would need to adjust the timestamp of the message to account for that offset.

We have found multiple places to manipulate the message, but would like to know the "correct" way to do it.

Thanks in advance.

Samahu commented 6 months ago

> We want to have the azimuth window's dimensions set as the width and height of the published point cloud message, but cannot find where these values are initially set. Through resizing with cloud.resize(ls.h * window_width); the new dimensions are 1 x n.

The point cloud is initialized in the PointCloudProcessor object: https://github.com/ouster-lidar/ouster-ros/blob/5a08f89e21dabfb78ae359f9bfc4ba869a275104/src/point_cloud_processor.h#L45

Resizing will change the point cloud to an unorganized point cloud; that's why you get the 1 x n dimensions.
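If you need the organized height x width layout back, one option is to size the storage and set the dimensions yourself instead of calling resize; a minimal sketch, where resize_organized is a hypothetical helper:

    #include <cstdint>

    #include <pcl/point_cloud.h>

    // Hypothetical helper: size the reduced cloud as an organized
    // height x width cloud instead of the 1 x n layout that resize() yields.
    template <typename PointT>
    void resize_organized(pcl::PointCloud<PointT>& cloud, int height,
                          int width) {
        cloud.points.resize(static_cast<std::size_t>(height) * width);
        cloud.height = static_cast<std::uint32_t>(height);
        cloud.width = static_cast<std::uint32_t>(width);
        cloud.is_dense = false;  // the window may still contain invalid returns
    }

With the earlier snippets, this would replace cloud.resize(ls.h * window_width) with resize_organized(cloud, ls.h, window_width).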

> We would like to synchronize a different sensor to the point in time at which the lidar scan is at the beginning of the azimuth window, using an ApproximateTimeSynchronizer. Therefore we would need to adjust the timestamp of the message to account for that offset.

The point cloud message timestamp is assigned in the PointCloudProcessor::process method, so you can handle adjusting the timestamp there.
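A hedged sketch of that adjustment, reusing the window_start and scan_ts conventions from the snippets above and assuming ROS 1 time types (stamp_at_window_start is a hypothetical helper):

    #include <cstdint>

    #include <ros/time.h>
    #include <sensor_msgs/PointCloud2.h>

    #include "ouster/lidar_scan.h"

    // Hypothetical helper: shift the message stamp forward by the time between
    // the scan reference timestamp and the first column of the azimuth window.
    void stamp_at_window_start(sensor_msgs::PointCloud2& msg,
                               const ouster::LidarScan& ls,
                               std::uint64_t scan_ts, int window_start) {
        const std::uint64_t col_ts = ls.timestamp()[window_start];
        const std::uint64_t offset_ns = col_ts > scan_ts ? col_ts - scan_ts : 0;
        msg.header.stamp += ros::Duration().fromNSec(offset_ns);
    }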

mfloto commented 5 months ago

Thanks for your help. This worked.

A small note: we noticed that we had to adjust the point index. The code in the comment above has been changed accordingly.