osrf / lrauv

Packages for simulating the Tethys-class Long-Range AUV (LRAUV) from the Monterey Bay Aquarium Research Institute (MBARI).
Apache License 2.0

Port DAVE DVL to Ignition #145

Closed: mabelzhang closed this issue 2 years ago

mabelzhang commented 2 years ago

This ticket outlines the options to help us prioritize how much of the DAVE DVL to port.

The NPS DAVE DVL is based on the WHOI ds_sim DVL. There are two conceptual parts to it:

  1. Bottom tracking. This exists in the WHOI ds_sim DVL (master branch on DAVE's fork, I think; double-check with NPS): https://github.com/Field-Robotics-Lab/ds_sim/blob/master/gazebo_src/dsros_dvl.cc https://github.com/Field-Robotics-Lab/ds_sim/blob/master/src/dsros_dvl_plugin.cc

    • Porting rays: There are 4 beams, implemented using a Gazebo-classic object (physics::RayShape?) to shoot cones out and check the object of intersection. This is done in ODE, which has a flag that does collision checking but won't enforce contact constraints. To port to Ignition, we need to see if DART supports reporting contact points without enforcing constraints. It is similar to how SonarSensor in Gazebo-classic is implemented, which has not been ported to Ignition. If feasible, we might want to port that upstream, then reuse the code. Another relevant sensor that might come up, RaySensor, has also not been ported. (Thanks @scpeters for the insights. Hope I paraphrased correctly.)
  2. Water tracking and current profiling. This is added in DAVE. DAVE DVL (ds_sim DVL plus water tracking and current profiling, nps_dev branch): https://github.com/Field-Robotics-Lab/ds_sim/blob/nps_dev/gazebo_src/dsros_dvl.cc https://github.com/Field-Robotics-Lab/ds_sim/blob/nps_dev/src/dsros_dvl_plugin.cc

    • Porting currents, on top of porting current profiling: This version of the DVL further depends on the NPS fork of the uuv_simulator repo, which adds currents (double-check with NPS which branch). That means that to port this DVL, NPS's ocean-current additions to uuv_simulator also need to be ported, which is not trivial.

If we don't need water tracking, we only need to port bullet 1, the ds_sim version.

Documentation on DAVE DVL https://github.com/Field-Robotics-Lab/dave/wiki/whn_dvl_examples https://github.com/Field-Robotics-Lab/dave/wiki/DVL-Water-Tracking https://github.com/Field-Robotics-Lab/dave/wiki/DVL-Seabed-Gradient

arjo129 commented 2 years ago

For the Ray/Beam tracing we could alternatively use ign-rendering's RayQuery to query the depth of various objects.

This discussion on CPU based ray collisions for CPU-Lidar (which may be relevant to us) can also be found here: https://github.com/ignitionrobotics/ign-sensors/issues/26

@chapulina outlines the need to create a Ray shape in ign-physics.

mabelzhang commented 2 years ago

Yeah, that ign-sensors#26 is the same ticket as the RaySensor one linked in the OP above. That sensor is the basis for some other sensors in DAVE, I think. The SonarSensor and RaySensor are different enough, though, that we might want to think about which one to use and why. The SonarSensor has some known issues too (linked from a comment in the close-the-gap ticket in the OP).

arjo129 commented 2 years ago

Good news is DART does have cone shapes, which I suppose can be abused as rays: https://dartsim.github.io/dart/v6.12.1/de/d3e/classdart_1_1dynamics_1_1ConeShape.html

chapulina commented 2 years ago

For the Ray/Beam tracing we could alternatively use ign-rendering's RayQuery to query the depth of various objects.

+1 to this, I'd recommend going with the rendering approach unless there's an explicit need to use physics. Physics-based ray sensors are notably slower. The only reason I can think of to use them is to avoid the need for a GPU, but Ignition features like EGL allow us to work around that.

braanan commented 2 years ago

Thanks for looking into this @mabelzhang. There's no immediate use case for water speed sensing atm, but I suspect that's something we'll want at some point. LRAUV currently only supports water mass speed measurements for a defined bin using the PD13 format, but at some point we'd also like to support full ADCP water speed via PD0. When/if we go down that route, I'd like to integrate the current readings from our existing data interface rather than supporting a new interface and adding dependencies.

It would be nice to use the DVL message types defined in https://github.com/apl-ocean-engineering/hydrographic_msgs/blob/main/acoustic_msgs/msg/Dvl.msg, but that's not a requirement.

mabelzhang commented 2 years ago

Re physics vs rendering: I actually ran into some glitches with the collision geometry for heightmaps, such that I had to disable the collisions and use only the visuals. I didn't dig into it much, but it appeared that the robot was colliding with invisible things when the heightmap was far below it, even though the upper bounding box of the heightmap intersected the robot. I don't know whether that's fixed with the new DEM feature.

+1 for using the hydrographic_msgs types. It would be a good example of early adoption. The messages were recently created as part of a community effort to standardize maritime sensor messages, and they've consulted Open Robotics about propagation and adoption. If we run into problems, we can give them feedback. On the other hand, if we upstream the DVL, then we might think a bit about dependencies and how stable these message types are going to be, for future maintenance.

scpeters commented 2 years ago
  • There are 4 beams, implemented using a Gazebo-classic object (physics::RayShape?)

yes, it looks like a physics::RayShape to me

scpeters commented 2 years ago
  • Porting rays: There are 4 beams, implemented using a Gazebo-classic object (physics::RayShape?) to shoot cones out and check the object of intersection. This is done in ODE, which has a flag that does collision checking but won't enforce contact constraints. To port to Ignition, we need to see if DART supports reporting contact point without enforcing constraints. It is similar to how SonarSensor in Gazebo-classic is implemented, which has not been ported to Ignition. If feasible, we might want to port that upstream, then reuse the code. Another relevant sensor that might come up, RaySensor, has also not been ported. (Thanks @scpeters for the insights. Hope I paraphrased correctly.)

yes, it looks like a physics::RayShape to me

* https://github.com/Field-Robotics-Lab/ds_sim/blob/master/gazebo_src/dsros_dvl.hh#L81

ok, as I look at it more closely, it seems that this plugin was experimenting with both the RaySensor (physics::RayShape) and SonarSensor (3D collision shape with collide_without_contact) approaches. It is currently using the RaySensor approach, though there are still some vestiges of the SonarSensor approach.

For the Ray/Beam tracing we could alternatively use ign-rendering's RayQuery to query the depth of various objects.

+1 to this, I'd recommend going with the rendering approach unless there's an explicit need to use physics. Physics-based ray sensors are notably slower. The only reason I can think of to use them is to avoid the need for a GPU, but Ignition features like EGL allow us to work around that.

the other significant difference is that ign-rendering's RayQuery will interact with Visual objects, while physics-based ray or collide-without-contact sensors will interact with Collision objects. This is a significant factor to consider if the collision and visual shapes are not identical in a given world.

Good news is Dart does have Cone shapes which I suppose can be abused as rays https://dartsim.github.io/dart/v6.12.1/de/d3e/classdart_1_1dynamics_1_1ConeShape.html

the collide-without-contact approach can be used with arbitrary 3D shapes, but they are not guaranteed to return the closest point to the sensor. The collision detection algorithm may return a point inside the overlapping volume, so further investigation of the narrow-phase collision algorithms may be needed

mabelzhang commented 2 years ago

Thank you Steve for looking into the details!

Re RaySensor: that makes sense. For context, I remember reading in the DAVE wiki that the RaySensor approach is used for more than one custom sensor.

Here's a page from the DAVE wiki making detailed comparisons between RaySensor and SonarSensor for underwater sonars https://github.com/Field-Robotics-Lab/dave/wiki/A-Gazebo-Ray-vs-Gazebo-Sonar-comparison "We concluded that the ray sensor could be used to calculate beam intensity while the Sonar sensor, which detects mesh collision, could not."

I definitely think porting something like this should involve a few verbal exchanges with the DAVE team, rather than us going in point-blank to port it and using alternatives that they might have already looked into and decided were substandard.

hidmic commented 2 years ago

(Wrote this yesterday, but forgot to post it). Circling back to this. @arjo129 and I had a quick sync the other day. Current plan of record is to use depth camera frames to sample distances to visuals. We can then try to find the objects within FOV along with their velocities (or try and model acoustic propagation). That'd be enough to replicate the DVL implementation in ds_sim.

I took a quick look at the Ignition Gazebo/Sensors architecture for rendering and custom sensors, in hopes we can build atop it. There's nothing special about custom sensors beyond some SDF conventions. Rendering sensors, on the other hand, do get special treatment. To build a custom depth-camera-like sensor, we would need to extract and re-purpose some of the functionality contained in the Sensors system and the RenderUtil class. Tricky, but doable.

What's still bugging me is how we are going to match points with (objects') velocities efficiently. We could perform ray queries and then reverse-look-up links by visual object IDs (which I presume is possible, but I haven't found a way yet), but I suspect that's going to be an expensive operation. I'll sleep on it.

arjo129 commented 2 years ago

As I mentioned, one option would be to apply a "velocity texture" of some form to each object. Then we could retrieve it in a single pass. I think we can solve the other problems first and then revisit this to make it fast.


braanan commented 2 years ago

http://www.teledynemarine.com/Pathfinder_DVL?ProductLineID=34