peci1 / robot_body_filter

Filters the robot's body out of point clouds and laser scans.
BSD 3-Clause "New" or "Revised" License

TF watchdog for sensor frame breaks filter #6

Closed · Martin-Oehler closed this issue 4 years ago

Martin-Oehler commented 4 years ago

Hello, the commit ee0c87fe714bf398cafbe6ae2deafd1bc5ac1a49 introduced a reachability check for the sensor frame. However, the sensor frame is not monitored by the TF watchdog. Therefore, the check always fails and every scan is rejected.

Best regards, Martin

peci1 commented 4 years ago

Thanks for the report, Martin. So does that mean your sensor frame is not a part of the robot model? Otherwise, it'd be added automatically.

But I agree the sensor frame should probably be always monitored by the watchdog, so I'll prepare a branch where you can test the fix.

Martin-Oehler commented 4 years ago

The sensor frame is part of the model in URDF and TF. But as far as I understand the code, only frames that are filtered are also monitored, which does not include the sensor frame.

peci1 commented 4 years ago

By default, frames of all links with a collision shape are monitored. So as long as the sensor has a collision shape and you don't specifically ignore it, it should be monitored. Nevertheless, I'm working on a fix that not only specifically adds the sensor frame to the monitored frames, but also automatically starts monitoring any frame for which there was a query (to cover the case when the links are not part of the model).
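For illustration, a sensor link like this (link names and dimensions are made up) has a collision shape, so its frame would be monitored by default:

```xml
<!-- Illustrative sketch only; names and dimensions are placeholders. -->
<!-- Because this link has a <collision> element, its frame is among
     the frames monitored by the TF watchdog by default. -->
<link name="lidar">
  <collision>
    <geometry>
      <cylinder radius="0.05" length="0.07"/>
    </geometry>
  </collision>
</link>
<joint name="lidar_mount" type="fixed">
  <parent link="base_link"/>
  <child link="lidar"/>
  <origin xyz="0.2 0 0.3" rpy="0 0 0"/>
</joint>
```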

peci1 commented 4 years ago

You can try the fix in the monitor_sensor_frame branch.

peci1 commented 4 years ago

You can also try with #8. It contains the fix from the monitor_sensor_frame branch and includes many more bugfixes I made recently.

Martin-Oehler commented 4 years ago

> By default, frames of all links with a collision shape are monitored. So as long as the sensor has a collision shape and you don't specifically ignore it, it should be monitored.

Our sensor frame does not have a collision shape, because it is rotating and we transform the data to a fixed frame first.

> You can try the fix in the monitor_sensor_frame branch.

This branch works, thanks.

> You can also try with #8. It contains the fix from the monitor_sensor_frame branch and includes many more bugfixes I made recently.

Still works. Is this branch ready for production use, given that you have not merged it yet?

peci1 commented 4 years ago

Thanks for reporting.

I wanted to give the branch some more thorough testing on the real robot, but because of the quarantine, I'm not getting close to it anytime soon. I at least brought home a Realsense, so I can do the testing with that. Generally, I tried to cover the new changes with tests, so it should be okay. But if your production means real production, I'd wait until the branch is better tested. If it's just playing around with robots, you can give it a try and report back any problems.

One thing that was wrong before and that I fixed: a lot of places in the code used the fixed frame where the filtering frame should have been used. Just make sure you're good with these changes (if your fixed and filtering frames are the same, it should not cause any trouble).
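For illustration, the fixed and filtering frames are separate parameters in the filter config; a minimal sketch (the frame names here are just placeholders for your setup):

```yaml
# Minimal sketch; frame names are placeholders.
frames/fixed: "odom"           # frame assumed static w.r.t. the world during scan acquisition
frames/filtering: "base_link"  # frame in which the body-model filtering is performed
frames/sensor: "velodyne"      # frame of the sensor producing the data
```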

Also, I'm thinking about providing a set of example configs. Would you mind sharing yours? I assume you have a Velodyne-type lidar?

Martin-Oehler commented 4 years ago

We use a rotating VLP-16. Our config can be found here. We use chained filters to account for different padding requirements.

peci1 commented 4 years ago

Thanks for sharing. In this case, I can imagine the filter works suboptimally, because the TF watchdog runs several times. I was thinking about ways to share the TF watchdog between multiple filter instances, but that would need a custom filter chain implementation...

I was also thinking of adding a parameter that would allow specifying scale and padding separately for each link or collision element. Would you be interested in that solution?

Another idea I have in mind is that if you only use the collision links for this filter, you could inflate the collision elements directly in the URDF and then use a single filter. That is actually what we do (well, we have a more complicated way of generating the robot description, so we can choose to have a robot description with this inflation or without it).
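For illustration, with xacro the inflation could look something like this (the property name and the dimensions are made up):

```xml
<!-- Illustrative xacro sketch: inflate a collision box by a padding
     property; the property name and dimensions are placeholders. -->
<xacro:property name="padding" value="0.05"/>
<link name="chassis">
  <collision>
    <geometry>
      <box size="${0.6 + 2*padding} ${0.4 + 2*padding} ${0.2 + 2*padding}"/>
    </geometry>
  </collision>
</link>
```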

Martin-Oehler commented 4 years ago

> I was also thinking of adding a parameter that would allow specifying scale and padding separately for each link or collision element. Would you be interested in that solution?

Yes, this would eliminate the need for two of the four chained filters. However, we would still chain two filters to have different padding for containment and ray-cast filtering, roughly as sketched below.
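Schematically, the two chained filters would look something like this (padding values are placeholders, not our exact config; check the package docs for the exact test-enabling parameter names):

```yaml
# Schematic sketch of a two-stage chain; values are placeholders.
cloud_filter_chain:
  - name: contains_filter          # tight padding for the containment test
    type: robot_body_filter/RobotBodyFilterPointCloud2
    params:
      body_model/inflation/padding: 0.02
      filter/do_contains_test: true
      filter/do_shadow_test: false
  - name: shadow_filter            # larger padding for the ray-cast (shadow) test
    type: robot_body_filter/RobotBodyFilterPointCloud2
    params:
      body_model/inflation/padding: 0.10
      filter/do_contains_test: false
      filter/do_shadow_test: true
```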

> Another idea I have in mind is that if you only use the collision links for this filter, you could inflate the collision elements directly in the URDF and then use a single filter. That is actually what we do (well, we have a more complicated way of generating the robot description, so we can choose to have a robot description with this inflation or without it).

We used this approach before, but it makes the description more complicated, so I don't like it very much. I think we can accept the overhead of using multiple filters if it means we can avoid this solution.

peci1 commented 4 years ago

#8 was merged to master, I'm gonna prepare a release now.