peci1 / robot_body_filter

Filters the robot's body out of point clouds and laser scans.
BSD 3-Clause "New" or "Revised" License

Why should the user set the sensor_frame #1

Closed · reinzor closed this issue 4 years ago

reinzor commented 4 years ago

First of all thanks for this filtering component!

I have a filter chain set up with incoming messages from different sensors with different frames. The filter component requires that I set a frame_id beforehand; why? Why not just use the header.frame_id to transform the incoming message?

peci1 commented 4 years ago

You're right, the sensor frame could at least be made optional. I still wanted to keep the possibility that, e.g., the point cloud is transformed before it gets to the filter, in which case the filter needs to know the position of the sensor. So maybe there could be some logic that looks at this parameter and, if it's empty, assumes the incoming scan/cloud is in the sensor frame.
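
A minimal sketch of what I mean (illustrative names only, not the actual filter API):

```cpp
// Minimal sketch of the fallback (names are illustrative, not the filter's
// actual API): if the sensor frame parameter is left empty, take the frame
// of the incoming message instead.
#include <string>
#include <sensor_msgs/PointCloud2.h>

std::string resolveSensorFrame(const std::string& sensorFrameParam,
                               const sensor_msgs::PointCloud2& cloud)
{
  // Empty parameter means "autodetect": assume the cloud is still expressed
  // in the frame of the sensor that produced it.
  if (sensorFrameParam.empty())
    return cloud.header.frame_id;

  // Otherwise trust the user; the cloud may already have been transformed
  // to a different frame before reaching the filter.
  return sensorFrameParam;
}
```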

If you're streaming multiple sensors into the filter, there's a limitation in the current implementation: all the scans have to be filtered in the same filtering frame. Is this acceptable for you? I'm not really a fan of adding support for multiple filtering frames, because that would make the already complex configuration even worse... But in that case I think you can set up multiple filters, one for each filtering frame, as sketched below.
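
A rough sketch of such a setup using the generic ros/filters C++ API (the node name and parameter namespaces are just placeholders):

```cpp
// Rough sketch (assumed setup, not code from this repo) of running two
// separate filter chains, one per filtering frame, with the ros/filters API.
// The namespaces "filter_chain_frame_a"/"filter_chain_frame_b" are placeholders;
// each would configure its own robot_body_filter instance.
#include <filters/filter_chain.h>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "dual_body_filter");
  ros::NodeHandle nh("~");

  // Each chain reads its own parameter namespace, so each can use a
  // different frames/filtering value.
  filters::FilterChain<sensor_msgs::PointCloud2> chainA("sensor_msgs::PointCloud2");
  filters::FilterChain<sensor_msgs::PointCloud2> chainB("sensor_msgs::PointCloud2");
  chainA.configure("filter_chain_frame_a", nh);
  chainB.configure("filter_chain_frame_b", nh);

  // ... subscribe to each sensor topic and call update() on the matching chain ...
  ros::spin();
  return 0;
}
```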

I'll definitely have a look at this, thanks for the idea.

reinzor commented 4 years ago

Thanks for the response. The same filtering frame would definitely be acceptable. However, I think the filtering frame parameter is also redundant, since you could just take the root of the robot_description. If I look a little closer, I think all frames can be derived from the messages / other parameters:

frames/fixed: Root of URDF tree
frames/sensor: Incoming message
frames/filtering: Root of URDF tree
frames/output: Same as incoming message

This way no assumptions about the set-up are made and we can reduce the number of parameters.
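
For example, the root frame could be read from the URDF roughly like this (just a sketch of the idea, not what the filter currently does):

```cpp
// Sketch of the proposed autodetection (hypothetical, not current behavior):
// take the fixed/filtering frame from the root link of the URDF that is
// loaded from the robot_description parameter.
#include <string>
#include <urdf/model.h>

std::string urdfRootFrame()
{
  urdf::Model model;
  if (!model.initParam("robot_description"))
    return "";  // no URDF available; the caller would fall back to a parameter
  return model.getRoot()->name;  // e.g. "base_link" on many robots
}
```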

peci1 commented 4 years ago

Well, you're simplifying a little bit too much. E.g. in our case, with mobile robots carrying slowly rotating lidars, base_link cannot be used as the fixed frame, because it simply isn't fixed (the robot moves during scan acquisition). So we set odom as the "nearest" fixed frame to the robot (even better would be map, but that is a pretty slow TF in our system).

The autodetection you're suggesting in fact makes more assumptions about the setup than the current implementation does. I tried to set the defaults so that in the simpler cases you wouldn't have to set most of the frames.

E.g. in the point-by-point version, if you don't set anything, you get fixed = filtering = output = base_link, which I think is reasonable. In the whole-scan-at-once version, you get fixed = base_link and sensor = filtering = output = laser.

Outputting the cloud in the same frame it came in was also an idea I had, but then I figured out that you end up with a scan in the filtering frame anyway, and converting it to another frame means more computation. If you really need that, you can always set the output frame to a non-default value. What I could do is implement a kind of magical value, e.g. INPUT_FRAME, which would mean that the output cloud should be transformed to the frame of the incoming message. That, together with sensor frame autodetection, could satisfy your use case, couldn't it?
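
Roughly, such a value could be resolved per message like this (a sketch of the idea only; INPUT_FRAME doesn't exist yet and the names are illustrative):

```cpp
// Sketch of the proposed "magical value" (INPUT_FRAME is only an idea from
// this thread, not an existing parameter value; names are illustrative):
// decide the final output frame per incoming message.
#include <string>
#include <sensor_msgs/PointCloud2.h>

std::string resolveOutputFrame(const std::string& outputFrameParam,
                               const std::string& filteringFrame,
                               const sensor_msgs::PointCloud2& input)
{
  if (outputFrameParam == "INPUT_FRAME")
    return input.header.frame_id;  // transform the result back to the input frame
  if (outputFrameParam.empty())
    return filteringFrame;         // default: no extra transform after filtering
  return outputFrameParam;
}
```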

reinzor commented 4 years ago

Thanks for the clarification. I just made a PR that should auto-detect the sensor frame of the incoming message so that the user does not have to specify it. This PR should still cover the use cases you described (correct me if I am wrong).

peci1 commented 4 years ago

Fixed in #2. Thanks for your time and contribution!