udgcirs / marine_ros_conventions_discussion

A repository for discussing new standard marine messages

merge or differentiate ProfilingSonar and ImagingSonar #2

Open smaria opened 5 years ago

smaria commented 5 years ago

I struggle to identify the difference between the two sonar messages. Both seem to be based on the LaserScan message from sensor_msgs.

Extra fields in ProfilingSonar:

Extra fields in ImagingSonar:

It appears the two are slightly different implementations of the same concept: range_resolution <-> range_increment, intensities <-> beams.

With my current knowledge of sonar sensing I cannot clearly assign a sonar device to either of these messages, and I know of no argument for why both are needed. As a merge of the two I would suggest https://github.com/smaria/marine_ros_conventions_discussion/blob/master/sensor_msgs/SonarScan.msg

narcispr commented 5 years ago

Profiling sonars measure range. They can be multibeam (several ranges at a time) or mechanical scanning sonars (i.e., they mechanically rotate and provide a range at each angle). Imaging sonars provide an image. It means that for each beam they return hundreds of measures. I think they are different enough to keep them separated. However, I'm not sure about the fields that they have to contain.
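
The distinction can be made concrete with a toy sketch (Python dataclasses with hypothetical names, not a proposed message layout): a profiling sonar yields one range per beam, while an imaging sonar yields a full intensity series per beam.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProfilingScan:
    """Hypothetical illustration: one range per beam."""
    beam_angles: List[float]        # [rad]
    ranges: List[float]             # [m], same length as beam_angles

@dataclass
class ImagingScan:
    """Hypothetical illustration: an intensity time series per beam."""
    beam_angles: List[float]        # [rad]
    intensities: List[List[float]]  # n_beams x n_samples

# A 3-beam profiler returns 3 numbers per ping; a 3-beam imager with
# 100 range bins returns 300.
prof = ProfilingScan(beam_angles=[-0.1, 0.0, 0.1], ranges=[9.8, 10.0, 9.9])
img = ImagingScan(beam_angles=[-0.1, 0.0, 0.1],
                  intensities=[[0.0] * 100 for _ in range(3)])
```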

tfurf commented 5 years ago

@narcispr, yeah, this can be difficult because some sonars even provide phase information (e.g. each 'cell' is a complex number).

I think writing a message that is generic "enough" in ROS is always going to be hard, given how the ROS msg IDL is limited to very simple patterns (only composition, no sense of inheritance). Limiting the options to ranging vs. imaging systems is a pragmatic approach. (NB that in some ways the profiling sonar is a down-sampling of the imaging sonar --- e.g. a MB return looks like an image but can be processed to be (range, intensity) tuples vs beam angle)

smaria commented 5 years ago

The suggested message type with unified field names can be split back into two, by removing line 32 for the profiling sonar. I believe all other information remains valuable for both cases.

However, I don't see an advantage in the split. While more complex sonar data exists, in my experience single- or multi-beam intensity data [0] is common enough to warrant consideration in the generic message.

My concern with splitting this into two message types is that packages using only the profiling data would likely depend only on the profiling message. A typical driver for an intensity-capable sonar would then have to publish both, leading to substantial duplication of information. A beginner would not only have to identify which of the two message types to use, but would also need to realize that both may be necessary to access the range data of an intensity-capable sonar. The unified message would also stay true to its role model, LaserScan, which likewise features an intensities field that can be intentionally left empty.

[0] What I call multi beam intensity data seems to be what @narcispr refers to as imaging. As far as I can tell, his distinction does not seem to consider single beam sonars (scanning or fixed) that can provide both ranges and intensities, or devices that would be called 'imaging sonars' but also provide a device internal range detection.

ivaughn commented 5 years ago

How much detail are we looking to get into? Some multibeams (Kongsberg EM-series, Reson-7125) make it possible to get per-beam TX time, RX time, and steering angles. This can be especially important with deep water multibeam from the surface as the multibeam can roll/pitch quite a lot in the two seconds it takes to make the round-trip to 1500m. Multi-sector systems also have a time offset between transmitting different beams.
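
As a back-of-envelope illustration of why per-beam TX/RX timing matters in deep water (the 1500 m/s sound speed and 3 deg/s roll rate are assumed round numbers, not from any datasheet):

```python
# Why per-beam TX/RX attitude matters in deep water: with a nominal
# 1500 m/s sound speed, a nadir beam to 1500 m takes 2 s round-trip,
# during which the platform can roll several degrees.
def round_trip_time(depth_m: float, c_mps: float = 1500.0) -> float:
    """Two-way nadir travel time under a straight-ray approximation."""
    return 2.0 * depth_m / c_mps

t = round_trip_time(1500.0)            # 2.0 s to the seafloor and back
roll_rate_deg_s = 3.0                  # hypothetical vessel roll rate
roll_change_deg = roll_rate_deg_s * t  # ~6 deg of roll between TX and RX
```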

Also, virtually all of these higher-end multibeams support at least equi-distant or equi-angular beam spacing.
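
For reference, the two spacings can be sketched as follows (a simplified flat-bottom model; "equidistant" here means footprints evenly spaced on the seafloor, i.e. uniform in the tangent of the steering angle):

```python
import math

def equiangular_angles(n: int, swath_deg: float) -> list:
    """Steering angles evenly spaced in angle across the swath."""
    half = swath_deg / 2.0
    return [-half + i * swath_deg / (n - 1) for i in range(n)]

def equidistant_angles(n: int, swath_deg: float) -> list:
    """Steering angles whose flat-bottom footprints are evenly spaced,
    i.e. uniform in tan(angle) rather than in angle."""
    half_t = math.tan(math.radians(swath_deg / 2.0))
    step = 2.0 * half_t / (n - 1)
    return [math.degrees(math.atan(-half_t + i * step)) for i in range(n)]

ea = equiangular_angles(5, 120.0)  # [-60, -30, 0, 30, 60]
ed = equidistant_angles(5, 120.0)  # [-60, ~-40.9, 0, ~40.9, 60]
```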

Does the broader community have a sense of how important it is for the standard multibeam message to support raytracing a new sound velocity profile?
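
For intuition on why keeping raw travel times and angles is useful, here is a minimal piecewise-constant Snell's-law ray trace (a layer-by-layer sketch, nowhere near survey-grade processing; the profile values are invented):

```python
import math

def raytrace(theta0_deg, twtt_s, svp):
    """Trace a ray through piecewise-constant sound-speed layers using
    Snell's law (sin(theta)/c held constant).  svp is a list of
    (layer_top_depth_m, sound_speed_mps) with increasing depths; the
    last layer extends downward indefinitely.  Returns
    (horizontal_m, depth_m) for the one-way path (twtt_s / 2).
    A sketch only: no in-layer curvature, no turning rays."""
    t_left = twtt_s / 2.0
    p = math.sin(math.radians(theta0_deg)) / svp[0][1]  # ray parameter
    x = z = 0.0
    for i, (ztop, c) in enumerate(svp):
        sin_t = p * c
        cos_t = math.sqrt(1.0 - sin_t * sin_t)
        if i + 1 < len(svp):
            dt = min((svp[i + 1][0] - ztop) / (c * cos_t), t_left)
        else:
            dt = t_left
        x += c * dt * sin_t
        z += c * dt * cos_t
        t_left -= dt
        if t_left <= 0.0:
            break
    return x, z

svp = [(0.0, 1500.0), (100.0, 1480.0)]  # invented two-layer profile
x, z = raytrace(0.0, 2.0, svp)          # nadir beam, 2 s two-way time
naive = 1500.0 * 2.0 / 2.0              # "divide by sound velocity": 1500 m
# z comes out near 1481 m -- roughly 19 m shallower than the naive
# range, even for this mild profile and a vertical beam.
```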

smaria commented 5 years ago

It sounds like the SonarBeam msg could be enhanced with more optional fields, giving the details for individual beams that @ivaughn suggested, which can be left empty if unused (did I miss any?):

time rx_time
time tx_time
float32 beam_angle  # angle of the sonar beam relative to the zero degree position of the sensor frame

A further suggestion for a field in the main message is range_intensities. This is based on the imagenex deltaT which can return either ranges, or ranges + intensity at that range, or no ranges but the full set of intensities for each beam.
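
A sketch of the optional-field idea (a hypothetical Python mirror, not message IDL): whichever arrays a device cannot populate are simply left empty.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BeamData:
    """Hypothetical sketch: unused arrays stay empty."""
    ranges: List[float] = field(default_factory=list)             # [m]
    range_intensities: List[float] = field(default_factory=list)  # intensity at each detected range
    intensities: List[float] = field(default_factory=list)        # full per-bin series

# The three DeltaT-like output modes:
ranges_only = BeamData(ranges=[10.0])
ranges_plus = BeamData(ranges=[10.0], range_intensities=[0.7])
full_image = BeamData(intensities=[0.1, 0.2, 0.9, 0.3])
```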

With all the extra optional fields, the message becomes a bit overwhelming for new users; however, I am unsure where to draw the line for a simpler message. A possible split would be removing the fields

Rather than differentiating ranging and intensity data, this approach differentiates single and multiple beams. This principle could also be further extended by a third message type for sonars with beam positions rotated about two axes (though I don't know enough about those to know if this would indeed work).

Any other split suggestions, arguments for/against splitting, or extra field suggestions?

ivaughn commented 5 years ago

Can I ask what our goal is for multibeam data? In particular, what's the role for a sonar message vs. a more general ROS pointcloud message?

As a rule, multibeam data is stored in one of three formats:

1). Beamformed RX and TX angles + travel time (usually two-way). This is generally not converted to a range because, in deep water or when trying to meet International Hydrographic Organization (IHO) standards, the conversion involves solving a PDE rather than simply dividing by sound velocity. This format is of interest here for two reasons: first, it's what survey-grade multibeams spit out, and second, there are some folks trying to do IHO-style processing. I just can't get them to chime in (C'mon Val!). Reson's s7k datagrams don't actually include a range.

2). Processed point data is typically stored in a pointcloud, admittedly in a pretty confusing coordinate frame (usually local-level but with heading+lat/lon defined in the body frame). Without a compelling reason to do otherwise, I'd suggest we use pointclouds for multibeam data. With a sensible choice of instrument coordinate frame, this becomes a decent way to handle low-end planar multibeams.

Note that we have to use full XYZ pointclouds because the points from yaw-compensating multibeams are not coplanar.

For details on how this has traditionally been handled, check out Leidos's Generic Sensor Format (GSF -- see http://www3.mbari.org/products/mbsystem/formatdoc/gsf_lib.pdf) and MB-System's native FBT format (http://www3.mbari.org/products/mbsystem/batheditintegration/index.html).

Conventional multibeam processing almost always uses indexed pointclouds. If your multibeam has 256 beams, every datagram should have 256 beams. Users, filtering algorithms, and the sonar itself can then flag beams as invalid.

3). Gridded data. This should ALSO be a thing, but will vary a fair bit based on whether people are using octomap, conventional grids, etc etc. Another topic for another issue.
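
Two of the conventions from point 2) above, full XYZ points (because yaw-steered beams leave the athwartships plane) and fixed-size indexed beam arrays with per-beam validity flags, can be sketched together; the names, flag values, and rotation order are simplified assumptions:

```python
import math

# (a) fixed-size, indexed beam array: invalid beams keep their slot and
#     are marked by a flag; (b) full XYZ points, since yaw compensation
#     pushes beams out of the y-z (athwartships) plane.
FLAG_VALID, FLAG_SONAR_REJECT, FLAG_FILTER_REJECT = 0, 1, 2

def beam_to_xyz(range_m, steer_deg, yaw_deg=0.0):
    """One beam to instrument-frame XYZ (x forward, y starboard, z down).
    steer_deg is athwartships steering; a nonzero yaw_deg moves the
    point out of the y-z plane.  Rotation order is simplified."""
    s, y = math.radians(steer_deg), math.radians(yaw_deg)
    return (range_m * math.sin(y),
            range_m * math.cos(y) * math.sin(s),
            range_m * math.cos(y) * math.cos(s))

N_BEAMS = 5
steer = [-60.0, -30.0, 0.0, 30.0, 60.0]
points = [beam_to_xyz(10.0, a, yaw_deg=2.0) for a in steer]  # x != 0
flags = [FLAG_VALID] * N_BEAMS
flags[1] = FLAG_SONAR_REJECT  # the sonar's own detector rejected beam 1

# Every ping keeps exactly N_BEAMS entries; consumers filter by flag.
valid = [p for p, f in zip(points, flags) if f == FLAG_VALID]
```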

smaria commented 5 years ago

Interesting! The proposed message is based on the existing suggestions (which in turn both seem to be based on LaserScan) and my (limited) experience using multibeam data for obstacle avoidance and mapping. It sounds like we need more message options for multibeam data, depending on the further processing and use of the data.

It seems to me that the proposed message is useful, but cannot cover all use cases. I would suggest opening a new issue discussing further multibeam sonar messages (#9), and refocusing this issue on the suggested message: how/whether it should be split and whether it is missing any information. When to use existing message types and when to define our own is a general question that we will likely run into repeatedly. I added some general notes to #4.

pvazteixeira commented 5 years ago

(apologies if this is not the right issue/place for this comment)

@ivaughn IMO the multibeam sonar message should be closer to what you're describing in (1) - data containing the intensity over time/range for each beam, as this would not prevent the user from running whatever processing is needed on the raw data. Additionally, a ROS driver for the multibeam could publish a separate range/point cloud message in parallel (something similar to LaserScan).

Regarding your point on reference frames - and maybe overlapping with #7 - I would go in the same direction as @smaria's proposed message specification, where a multibeam message contains an array of single-beam Sonar messages, each carrying its own header (and reference frame). I would also suggest changing SonarBeam to be more "self-contained", so that it can be used with single-beam sonars (e.g. depth sounders or mechanically-scanned profiling sonars), as well as accommodating beams that are not coplanar and/or equally spaced.

# single beam sonar measurement
Header header

float32 azbw             # beamwidth in azimuth/y [rad]
float32 elbw             # beamwidth in elevation/z [rad]
float32 frequency        # [Hz]

float32 range_min        # minimum range value [m]
float32 range_max        # maximum range value [m]

float32[] range          # range data [m] (Note: values < range_min or > range_max should be discarded)
float32[] intensity      # intensity data [device-specific units].  If your
                         # device does not provide intensities, please leave
                         # the array empty.

Another advantage of moving some of the complexity to this single-beam message would be that you would potentially no longer need separate messages for profiling and imaging multibeams. In the past, I have used a similar LCM specification with both imaging and profiling multibeams.
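
For illustration, a Python mirror of the proposed message (Header reduced to a frame_id string; the numeric values are invented) shows how the same beam type can serve both profiling- and imaging-style data:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SonarBeam:
    """Python mirror of the proposed single-beam message; Header is
    reduced to a frame_id string for this sketch."""
    frame_id: str
    azbw: float       # beamwidth in azimuth/y [rad]
    elbw: float       # beamwidth in elevation/z [rad]
    frequency: float  # [Hz]
    range_min: float  # [m]
    range_max: float  # [m]
    range: List[float] = field(default_factory=list)
    intensity: List[float] = field(default_factory=list)

@dataclass
class Sonar:
    """A multibeam as an array of self-contained beams."""
    beams: List[SonarBeam] = field(default_factory=list)

# Invented values: a profiling-style beam (one range, no intensities)
# and an imaging-style beam (intensities only) share one type.
profiling = SonarBeam("beam_0", 0.03, 0.3, 260e3, 0.5, 120.0,
                      range=[42.1])
imaging = SonarBeam("beam_1", 0.03, 0.3, 260e3, 0.5, 120.0,
                    intensity=[0.1, 0.4, 0.9])
mb = Sonar(beams=[profiling, imaging])
```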