
robot_body_filter

Filters the robot's body out of point clouds and laser scans.

Tutorial

Check out the webinar recording where a lot of options for this filter are explained and demonstrated! https://www.youtube.com/watch?v=j0ljV0uZy3Q

Changes vs PR2/robot_self_filter:

Build Status

Development versions: GitHub Actions builds for melodic and noetic.

Melodic release jobs: binary builds for melodic amd64, arm64 and armhf.

Noetic release jobs: binary builds for noetic focal amd64, arm64 and armhf.

Basic Operation

filters::FilterBase API

The basic workings of this filter are implemented via the filters::FilterBase API for the sensor_msgs::LaserScan and sensor_msgs::PointCloud2 types. This means you can load this filter into a FilterChain alongside other filters as usual. Unlike standard filters, this one can also publish several interesting topics and it subscribes to TF.

General overview

This filter reads the robot model and the filter config, subscribes to TF, waits for data (laser scans or point clouds) and then cleans them of various artifacts (this is called data filtering).

It can perform three kinds of data filtering: clipping the data based on the provided sensor limits (parameter filter/do_clipping), removing points that are inside or on the surface of the robot body (parameter filter/do_contains_test) and removing points that are seen through a part of the robot body (parameter filter/do_shadow_test). These kinds of tests are further referred to as "clipping", "contains test" and "shadow test".
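To illustrate, the three tests could be switched on or off in the filter's parameters roughly like this (a minimal sketch; the slash-separated parameter names from above map to nested YAML keys, and the values shown are examples, not necessarily the package defaults):

```yaml
# Fragment of robot_body_filter parameters (example values)
filter:
  do_clipping: true        # clip points outside the sensor's measurement range
  do_contains_test: true   # drop points inside or on the surface of the robot body
  do_shadow_test: true     # drop points seen "through" a part of the robot body
```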

When working with point clouds, the filter automatically recognizes whether the cloud is organized or non-organized. In organized clouds, it marks the filtered-out points as NaN. In non-organized clouds, it removes the filtered-out points. In laser scans, removal is not an option, so the filtered-out points are marked with NaN (some guides suggest that max_range + 1 should be used for marking invalid points, but this filter uses NaN as a safer option).

Performance tips

In general, the filter is computationally expensive (clipping is fast, the contains test is moderately CPU-intensive, and the shadow test is the most expensive part because it basically performs raytracing). You can limit the required CPU power by restricting the filter to the parts that matter. E.g. if the robot has a link that can never be seen by the sensor, put it in the list of ignored links. The fewer links are processed, the better the performance. If you're only interested in removing a few links, consider using the only_links parameter.

To speed up shadow filtering, you can set filter/max_shadow_distance, which limits the points considered for shadow tests to those close to the sensor. Setting this to e.g. three times the diameter of the robot should remove all of the shadow points caused by refraction by a part of the robot body, but you have to verify this with real data.
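As an example, the performance-related settings mentioned above might look like this (a sketch; only_links and filter/max_shadow_distance are named above, while the link names, the 2.5 m value and the exact form of the "ignored links" parameter are illustrative assumptions):

```yaml
# Example performance-related settings (illustrative values)
filter:
  max_shadow_distance: 2.5               # metres; only points closer than this are shadow-tested
only_links: ["left_arm", "right_arm"]    # hypothetical link names; process only these links
# The parameter holding the list of ignored links is assumed to take a similar list form;
# check the parameter table for its exact name.
```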

Performance also strongly depends on the representation of the robot model. The filter reads <collision> tags from the robot URDF. You can use boxes, spheres and cylinders (which are fast to process), or you can use convex meshes (which are much more expensive). If you pass a non-convex mesh, its convex hull will be used for the tests. Don't forget that each link can have multiple <collision> tags. If you do not have time to convert your meshes to the basic shapes, try to at least reduce the number of triangles in your meshes. You can keep your high-quality meshes in <visual> tags. To simplify your model to primitive shapes, you can either edit the URDF manually, or you can utilize ColliderGen.

Model inflation

You can utilize the built-in model inflation mechanism to slightly alter the size of the model. You will probably want to add a bit of "margin" to the contains and shadow tests so that points lying a few millimeters outside the robot body still get removed. You can set a default scale and padding which are used for all collisions. Different inflation can be used for the contains tests and for the shadow tests. Inflation can also be specified separately for each link. See the body_model/inflation/* parameters for details.

Scaling means multiplying the shape dimensions by the given factor (with its center staying in the same place). Padding means adding the specified metric distance to each "dimension" of the shape. Padding a sphere by p just adds p to its radius; padding a cylinder adds p to its radius and 2p to its length (you pad both the top and the bottom of the cylinder); padding a box adds 2p to all its extents (again, you pad both of the opposing sides); padding a mesh pads each vertex of its convex hull by p along the direction from the mesh center to the vertex (mesh center is the center specified in the mesh file, e.g. DAE). Have a good look at the effects of mesh padding, as the results can be non-intuitive.
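A possible inflation configuration might look as follows (a sketch; only the body_model/inflation/* namespace is documented above, so the scale and padding key names and the values are assumptions to be checked against the parameter table):

```yaml
# Example inflation settings (key names under body_model/inflation/* assumed)
body_model:
  inflation:
    scale: 1.05     # multiply every collision shape's dimensions by 5 %
    padding: 0.02   # additionally pad each dimension by 2 cm (see the padding semantics above)
```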

Data acquisition modes

The filter supports data captured in two modes: all at once (e.g. RGBD cameras), or each point at a different time instant (mostly lidars). This is controlled by the sensor/point_by_point setting. Each mode supports both laser scan and point cloud input (although all-at-once laser scans aren't that common). Point-by-point pointclouds are e.g. the output of 3D lidars like Ouster (where multiple points can share the same timestamp, but not all of them do). If you want to use the point-by-point mode with pointclouds, make sure they contain not only the usual x, y and z fields, but also a float32 field stamps (the time difference from the time in the header) and float32 fields vp_x, vp_y and vp_z which contain the viewpoint (the position of the sensor in the filtering frame) from which the robot saw that point.

When filtering in the point-by-point mode, the robot posture has to be updated several times while processing a single scan (to reflect the motion the robot performed while acquiring the scan). The frequency of these updates can also have a significant impact on performance. Use the filter/model_pose_update_interval parameter to set the interval for which the robot is considered stationary. The positions of the robot at the beginning and at the end of the scan are queried from TF, and the intermediate positions are linearly interpolated between these two.
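For a point-by-point sensor such as a rotating 3D lidar, the relevant settings might look like this (a sketch using the parameters named above; the 0.002 s value is only an example):

```yaml
# Example settings for scans acquired point by point (e.g. a rotating lidar)
sensor:
  point_by_point: true              # each point has its own timestamp and viewpoint
filter:
  model_pose_update_interval: 0.002 # seconds for which the robot posture is considered constant
# Point-by-point pointclouds must also carry float32 fields
# stamps, vp_x, vp_y and vp_z besides the usual x, y and z.
```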

TF frames

The filter recognizes four logical TF frames (some of which may be the same physical frames).

Fixed frame is a frame that doesn't change within the duration of the processed scan. For all-at-once scans, this frame is therefore not needed, because the duration of the scan is zero. For point-by-point scans, it depends on the particular scenario. In static installations (like manipulators with sensors not attached to them), it can be the sensor frame. For stationary robots with the sensor attached to a movable part of their body, base_link will be a good choice. For fully mobile robots, you will need an external frame, e.g. odom or map (beware of cyclic dependencies - if the map is built from the filtered scans, you obviously cannot use map as the fixed frame for filtering those scans).

Sensor frame is the frame in which the data were captured. Generally, it is whatever is in the header.frame_id field of the processed messages. You can use the filter for data from multiple sensors - in that case, you can leave the sensor frame unfilled and each message will be processed in the frame given in its header.

Filtering frame is the frame in which the data filtering is done. For point-by-point scans, it has to be a fixed frame (possibly different from the fixed frame set in frames/fixed). For pointcloud scans, it should be the sensor frame (if all data are coming from a single sensor), or any other frame. It is also used as the frame in which all debugging outputs are published.

Output frame can only be used with pointcloud scans and allows transforming the filtered pointcloud to a different frame before it is published. It is just a convenience which can save you from launching a separate transformation nodelet. By default, filtered pointclouds are output in the filtering frame.
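Putting the four logical frames together for a mobile robot with a single lidar might look like this (a sketch; frames/fixed is named above, while the sensor, filtering and output key names are assumed by analogy and should be checked against the parameter table):

```yaml
# Example frame configuration for a mobile robot (key names partly assumed)
frames:
  fixed: "odom"           # does not move during the acquisition of one scan
  sensor: "laser"         # usually taken from header.frame_id; may be left empty
  filtering: "base_link"  # frame in which the tests and debug outputs are performed
  output: "base_link"     # frame of the published pointcloud (pointcloud scans only)
```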

Bounding shapes

As a byproduct, the filter can also compute various bounding shapes of the robot model. There are actually four robot models - one for the contains test, one for the shadow test, one for the bounding sphere computation and one for the bounding box (these models can differ in inflation and in the links they consider). All bounding shapes are published in the filtering frame. For point-by-point scans, the bounding shapes correspond to the time instant specified in the header of the processed scan. The computation of bounding shapes is off by default, but enabling it is cheap (performance-wise).

The bounding sphere is easy - it is the smallest sphere that contains the whole collision model used for bounding sphere computation (with the specified exclusions removed).

The bounding box is the smallest axis-aligned bounding box aligned to the filtering frame. It is built from the model for bounding box computation.

The local bounding box is the smallest axis-aligned bounding box aligned to the frame specified in local_bounding_box/frame_id. It is especially useful with mobile robots when the desired frame is base_link. It is built from the model for bounding box computation.

The oriented bounding box should be the smallest box containing the collision model. However, its computation is very ill-conditioned, so the results can be quite unsatisfactory. Currently, the oriented bounding box of each of the basic collision shapes is "tight", but merging the boxes is not optimal. A good algorithm would probably require costly and advanced iterative methods. The current implementation uses FCL in the background and merges the boxes using fcl::OBB::operator+=() without any further optimizations. It is built from the model for bounding box computation.

The filter also supports publishing auxiliary pointclouds which "cut out" each of these bounding shapes. These are the input data converted to a pointcloud in the filtering frame, from which all points belonging to the bounding shape are removed. Please note that the "base" used for cutting out is the input pointcloud, not the filtered one.
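Enabling the bounding shapes could look roughly like this (a sketch; only local_bounding_box/frame_id is documented above, and the */compute flag names are assumed, so check them against the parameter table):

```yaml
# Example bounding-shape settings (the */compute flag names are assumptions)
bounding_sphere:
  compute: true
bounding_box:
  compute: true
local_bounding_box:
  compute: true
  frame_id: "base_link"   # frame to which the local bounding box is aligned
```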

First setup/debugging

The filter offers plenty of debugging outputs to make sure it does exactly what you want it to do. All the options are described in the last part of this page. Generally, you should look at the pointclouds visualizing which points got filtered out and you should also check the robot models used for filtering.

Also, have a look in the examples folder to get some inspiration.

Usage

This is a standard filters::FilterBase<T>-based filter which implements the configure() and update(const T&, T&) methods. This means it can be loaded e.g. as a part of a filter chain via the laser_filters package or the relatively new sensor_filters package. The input and output data are thus not supplied via topics, but are passed to the update() method via the C++ API.

This filter is a bit unusual in that it subscribes to and publishes several topics. Normally, filters only operate via the update() method and are not expected to publish anything. This filter is different: it requires a working ROS node handle and can publish a lot of auxiliary or debug data.
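For instance, loading the filter into a laser scan filter chain (as read by the laser_filters nodes from the scan_filter_chain parameter) might look like this (a sketch; the plugin type string is assumed from the package name, so verify it against the examples folder):

```yaml
# Example scan_filter_chain entry (the plugin type name is an assumption)
scan_filter_chain:
  - name: body_filter
    type: robot_body_filter/RobotBodyFilterLaserScan
    params:
      frames:
        filtering: "base_link"
      filter:
        do_clipping: true
        do_contains_test: true
        do_shadow_test: true
```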

Subscribed Topics

Published Topics

Provided Services

Filter Parameters

Have a look in the examples folder to get inspiration for configuration of your filter.

Debug Operation

These options exist to help correctly set up and debug the filter operation, and they should be turned off in production environments since they can degrade the performance of the filter.

Published Topics

Filter Parameters