ros-navigation / navigation2

ROS 2 Navigation Framework and System
https://nav2.org/

Using 3D environmental meshes in navigation layer #1461

Closed ruffsl closed 4 years ago

ruffsl commented 4 years ago

I'm working with a VIO SLAM pipeline (e.g. Kimera) that can be used to generate 3D meshes of the environment, and would like to ask about the best approach to utilizing this world-model representation in the navigation planning layer. I can think of a few conventional approaches that I'll mention below, but would prefer to make full use of the mesh representation or avoid costly model transformations.

The most conventional approach I can think of is discretizing the world mesh into 3D voxels, or into an elevation projection onto 2D occupancy grids, by treating the vertex coordinates of the mesh as if they were point clouds. However, this throws away the plane/face information of the mesh that could be exploited by planners using ray tracing or line-plane intersection, and it could lead to approximation errors when the vertices of the mesh are sparse or widely spaced.
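As a rough illustration, the vertex-projection idea above might look like the following (a minimal sketch with hypothetical function and parameter names; a real implementation would rasterize the mesh faces too, which is exactly the information this shortcut throws away):

```python
import numpy as np

def vertices_to_occupancy(vertices, resolution=0.05, z_min=0.1, z_max=1.5):
    """Project mesh vertices into a 2D occupancy grid (vertex-only shortcut).

    vertices: (N, 3) array of mesh vertex coordinates in the map frame.
    resolution: grid cell size in meters.
    z_min, z_max: height band treated as an obstacle for a planar robot.
    """
    # Keep only vertices inside the robot's height band.
    v = vertices[(vertices[:, 2] >= z_min) & (vertices[:, 2] <= z_max)]
    origin = v[:, :2].min(axis=0)
    # Bin each remaining vertex into a grid cell.
    cells = np.floor((v[:, :2] - origin) / resolution).astype(int)
    width, height = cells.max(axis=0) + 1
    grid = np.zeros((height, width), dtype=np.uint8)
    grid[cells[:, 1], cells[:, 0]] = 100  # 100 = occupied, ROS occupancy convention
    return grid, origin
```

Note how a large triangle with only three widely spaced vertices would mark just three cells occupied, which is the approximation error mentioned above.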

SteveMacenski commented 4 years ago

A couple questions and a few comments. All pretty off the cuff.

Q: In what way would you like to "utilize" the model? First I'd say the two main uses are planning and control, but also localization if you like, which seems like the clearest use case for this type of information. Because this is VIO and not a full SLAM, it's going to be prone to drift. If you're proposing to have this be the global representation of the world you'd like to plan in, you'll need to be aware that your positioning system will have to be based on the same warped coordinate system, or else things will probably go out of whack (i.e., the positioning coordinates no longer map 1:1 to your odometric-3D-mesh coordinates). In that case, I think your application will have problems, and this won't scale to even moderately large spaces.

Q: Is your goal to navigate in 3D or 2D? If you're moving in more or less a plane, you may want to downsample this to 2D pixels or 3D voxels just to be more regular, and then you can use the normal toolset. Judging from your explanation of your planning space (not really a costmap, but another representation of C-space), I don't know that you need to concern yourself with raycasting if you're just going to check normals/gradients/heights for traversability when planning in 2D. 3D may be different. I'd say there are no planners in the ROS ecosystem right now, that I'm aware of, that use the more graphics-oriented approaches you mention, but if you know of some, feel free to point them out to me. The choice here seems use-case specific. My feeling is that since these meshes are dense, and probably stored in memory, this method doesn't scale well to begin with, so the planning space is sufficiently small that all of the above would work fine. If you would like to use that information, my follow-up is "why"? I know you're throwing out good information, but I'm asking whether there's a genuine requirement making you think about these things, or whether it's a case of "I have it so I want to use it", or the conventional methods aren't good enough based on some tangible requirement.

Q: Can you give more context on the project? This sounds interesting. We haven't yet done much work in Nav2 on other representations and the interfaces to algorithms for them. Mostly just haven't gotten to it yet.

C: I don't think that turning it into an elevation or gradient map would actually require you to throw out the normals or other related information. You can just create a struct containing it, stored at each node. I think that's what grid_map does anyhow.
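As a toy sketch of that idea (hypothetical class, loosely modeled on grid_map's concept of named per-cell layers; grid_map itself is C++ and stores each layer as an Eigen matrix):

```python
import numpy as np

class LayeredGridMap:
    """Toy multi-layer 2D grid: each cell carries elevation plus normals.

    All layers share the same grid shape, so per-cell data beyond
    occupancy (e.g. surface normals) is kept rather than thrown away.
    """

    def __init__(self, shape, layers=("elevation", "normal_x", "normal_y", "normal_z")):
        # NaN marks cells that have not been observed yet.
        self.layers = {name: np.full(shape, np.nan) for name in layers}

    def set_cell(self, i, j, **values):
        # e.g. set_cell(3, 4, elevation=0.2, normal_z=0.98)
        for name, value in values.items():
            self.layers[name][i, j] = value

    def cell(self, i, j):
        # Gather all per-layer values for one cell, struct-style.
        return {name: layer[i, j] for name, layer in self.layers.items()}
```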

SteveMacenski commented 4 years ago

Closing -- we can continue discussion but there isn't really an action item here.

ruffsl commented 4 years ago

Closing -- we can continue discussion but there isn't really an action item here.

Sure thing, we can keep this a discussion until we come up with something more concrete.

@SteveMacenski, I've opened some tickets directly over on Kimera to connect with the project maintainers about these ideas. As the author of slam_toolbox, I figured you might be interested as well. I'm thinking about non-planar SLAM for non-planar path planning.

SteveMacenski commented 4 years ago

Got it - sorry, I didn't know things were happening behind the scenes. I figured this ticket had been forgotten about.

Also a note: I had a short meeting with the CEO of ANYbotics over the weekend about porting grid_map and elevation_mapping over to ROS 2. Part of the goal of that work would be to replace costmap_2d with it so that we have a terrain mapping solution to work with.

Accordingly, for anything you do with Kimera, I would recommend thinking about first-class support for grid_map. Both because of the above, and because grid_map is the most popular generic terrain representation method I am aware of.

ruffsl commented 4 years ago

Because this is VIO and not a full SLAM, it's going to be prone to drift.

The project's name, Kimera-VIO, is a bit misleading; it does include SLAM capabilities, particularly once the LoopClosureDetector module is enabled: https://github.com/MIT-SPARK/Kimera-VIO/blob/master/docs/tips_usage.md

You may want to check out the recent publication detailing the architecture: https://github.com/MIT-SPARK/Kimera-VIO/blob/master/README.md#publications

Is your goal to navigate in 3d or 2d?

I'd like to eventually reach 3D, as I have a few projects I'm working up to that could use it, e.g., when the world isn't inherently planar, as with outdoor mobile robots. But one of my more immediate tasks is 2D indoor planar navigation. In either case I'd like to apply VIO SLAM pipelines that are inherently 6DoF.

If you would like to use that information, my follow-up is "why"? I know you're throwing out good information, but I'm more asking if there's a genuine requirement making you have to think about these things or if its a case of "I have it so I want to use it" or the conventional methods aren't good enough based on some tangible requirement.

It's a little of column A, a little of column B, I'd say. For one, the sensors required, 2D cameras + IMU, are cheaper than conventional 2D LIDAR sensors, even more so compared to 3D LIDARs, and are also often lighter, smaller, and more robust. Though the latest RealSense LIDAR may be pushing these boundaries:

https://www.intelrealsense.com/lidar-camera-l515

Regardless, the planar assumptions are limiting in other domains, such as ROVs or aerial robots. But would you have any links for using grid maps (meaning voxel grids?) and elevation mapping in ROS1 navigation as a reference?

SteveMacenski commented 4 years ago

I'm familiar with Kimera. At some point last year I did a dive into it to see what was up - I forget at the moment why I didn't explore it more.

I'd like to eventually reach 3D

We're on the same page here. Actually, the 3D navigation part of that problem is pretty closed-form; I know it's something I could do given time and resources. What I'm a little more skeptical of is the 3D generalized positioning system and map representation for localization (and the localizer).

If you use any of the 2D methods, they output the same map format, for the most part. That's very untrue of the dense, visual, or gradient methods. It's the wild west out there. That makes it hard for me to justify dumping too much time into any individual one; if it doesn't work out, then any Nav work I do will be burned. If we can solve that problem, I'm very gung-ho about moving Nav entirely to gradients and 3D planners. Just "ignoring" the positioning thing is what my engineering brain says, but my practical brain says "then all this work will be wasted". Gradient-world could still be valuable for planar robots, but it would be naive to ignore the 3D SLAM and localization needed for it to be practically usable in 3D environments.

RealSense LIDAR

I'd like to highlight, and I've had this discussion a lot, that in nearly no respect is this a LIDAR. It's a ToF depth camera analogous to the Picoflexx, IFM, Meere, and dozens of others. It's just a depth camera. Intel is just using buzzwords because their traditional RealSense depth tech isn't great by comparison.

ruffsl commented 4 years ago

If you use any of the 2D methods, they output the same map format, for the most part. That's very untrue of the dense, visual, or gradient methods. It's the wild west out there.

True. I haven't yet seen a clear victor among formats arise, but I also haven't delved deep into that topic.

I'm very gung-ho about moving Nav entirely to gradients and 3D planners.

Haven't seen much recently around using gradients for planning, mainly just in old textbooks, e.g.:

https://books.google.com/books?id=S3biKR21i-QC&lpg=PA91&ots=blANP7VFWT&dq=ronald%20arkin%20%22gradient%22%20planner&pg=PA91

Are there many frameworks that make use of gradients for planning, or just motion control?

that in nearly no respect is this a LIDAR. Its a ToF depth camera

Hmm... I agree about the marketing capitalization on buzzwords, but I've always seen sensors that measure the Doppler shift, or changes in phase of the reflected light, as LIDARs, be they scanning or scannerless, 1D, 2D, 3D, etc. Time-of-flight cameras using CMOS are, IMO, merely a part of the broader class of LIDARs. But it's definitely different from previous tech using structured light or optical parallax with global shutters.

Still, it's a convenient package: a hardware-synchronized global-shutter stereo pair + 6DoF IMU with an active SDK. I only wish the SDK/firmware interpolated the Acc/Gyro motion frames or synchronized their sample rates. I could still do with a more affordable option that skips the IR projector.


Edit: it uses a scanning pattern instead of illuminating the whole scene at once; I guess that saves power.

The Intel® RealSense™ LiDAR Camera L515 uses an IR laser, a MEMS, an IR photodiode, an RGB imager, a MEMS controller, and a vision ASIC. The MEMS is used to scan the IR laser beam over the entire field of view (FOV). https://www.intelrealsense.com/download/7691

ToniRV commented 4 years ago

@ruffsl thanks for pointing me to this thread, very interesting discussion! Let me just add that Kimera will soon support global 3D path planning, either using the volumetric ESDF (honestly slow) or using a skeleton of the free space (topological, quite fast), building on ETH's work: https://github.com/ethz-asl/mav_voxblox_planning

[Screenshot from 2020-01-31: a 2D slice of the 3D ESDF and the skeleton map]
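For context, an ESDF stores at each voxel the distance to the nearest obstacle, so a planner's collision check reduces to a lookup plus a radius comparison. A minimal sketch of that check on a precomputed 2D distance field (hypothetical interface; voxblox's actual API differs):

```python
import numpy as np

def is_collision_free(esdf, point, origin, resolution, robot_radius):
    """Check one point against a precomputed Euclidean signed distance field.

    esdf: 2D array, indexed [ix, iy], where each cell holds the distance
          in meters to the nearest obstacle.
    point, origin: (x, y) in meters; resolution: cell size in meters.
    """
    idx = np.floor((np.asarray(point) - np.asarray(origin)) / resolution).astype(int)
    if np.any(idx < 0) or np.any(idx >= esdf.shape):
        return False  # outside the mapped volume: treat as unsafe
    # Free iff the robot's bounding circle fits within the clearance.
    return bool(esdf[tuple(idx)] > robot_radius)
```

This constant-time query is what makes sampling- and search-based planners cheap to run on top of an ESDF, at the cost of building the field up front.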

ruffsl commented 4 years ago

Nice! I just read through ETH's 2019 paper on voxblox_planning; I wasn't aware of this work before. I may read through the rest of Dr. Helen Oleynikova's thesis when I get the chance. I suspect ground-based mobile robots could leverage the same topological planning by adding sampling constraints to search for free-space trajectories along the "ground" manifold.

SteveMacenski commented 4 years ago

Haven't seen much recently around using gradients for planning, mainly just from old textbooks, e.g:

Sorry, I was unclear; I meant planning with gradient (i.e., terrain) representations of the space. I suppose it could be a mesh instead, but I think the mesh should be converted into a gradient representation anyhow.
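One simple way to derive such a gradient representation from an elevation map is to compute the per-cell slope and normalize it into a traversability cost. A hedged sketch, assuming a dense elevation array (hypothetical function name):

```python
import numpy as np

def slope_cost(elevation, resolution, max_slope_deg=25.0):
    """Turn an elevation map into a traversability cost in [0, 1].

    elevation: 2D array of heights (m); resolution: cell size (m).
    Cells steeper than max_slope_deg saturate at cost 1.0 (non-traversable).
    """
    # Finite-difference height gradients along rows and columns.
    dz_dy, dz_dx = np.gradient(elevation, resolution)
    # Steepest-ascent slope angle per cell, in degrees.
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return np.clip(slope / max_slope_deg, 0.0, 1.0)
```

A planner can then treat this layer like any other costmap, without needing the original mesh faces at query time.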

Time of flight cameras using ... are merely a part of the broader class of LIDARs.

If that's your definition, then fair enough: this falls into the class of LIDARs for you. For me, I don't consider it a LIDAR unless it's long range; ToF is one way you can make a LIDAR, and there are many others. If you consider the Picoflexx and IFM cameras LIDARs, then I think it's fair to call this one too.

They need to not have their logo on the front glass, though. That's a serious non-starter for me. You have all the real estate on the back, Intel; use it. I'm using your camera to promote my product, not yours. My wheel vendor doesn't leave their logo patterns in the tire tracks.

@ToniRV, I know I'm probably stepping into a larger discussion you guys are having, but when you say that, do you mean path planning for a drone (i.e., flying around a 3D mesh) or planning for a ground robot (i.e., in contact with a 3D mesh, estimating surface traversability)?

God, those ASL and RSL guys are awesome. They put out such great work. If I ever went back for a PhD...

ToniRV commented 4 years ago

@SteveMacenski sorry, I meant path planning for a drone (a ground robot should also be possible using this repo instead, https://github.com/anybotics/elevation_mapping, I guess, but we don't support it currently).

SteveMacenski commented 4 years ago

Yup, that's the medium-term direction I'm going with this project. I was just curious, since your links seemed more drone-related, but (at least in my interpretation) this project is more ground-robot related.

I met with Péter briefly last week, and I'm probably going to work on porting grid_map and elevation_mapping to ROS 2 for this.