ros-planning / navigation_experimental

Experimental navigation techniques for ROS robots.

[ROS2 Navigation Inquiry] A few technical questions and a few motivational questions #44

Closed SteveMacenski closed 3 years ago

SteveMacenski commented 4 years ago

Hi,

I'm a maintainer on the ROS2 Nav stack and I'm in the middle of a trade study to remove naive grid search as the default planner (it has a place and time, and will continue to be maintained, but hopefully not as widely used).

On this journey, my requirements are to support a larger range of arbitrary vehicles. Diff drive and omni have been reasonably supported, but support for Ackermann-style robots is lacking. There are undoubtedly other strange options as well, but those are my 3 motivating classes. Moreover, I want to support a wider class of irregular base shapes (i.e., completely remove any "circular"-related assumptions).

I've been exploring what the functional state of the art is for unconstrained autonomous driving applications -- knowing that if it supports Ackermann, I can create a simplified motion set for differential drive and omnidirectional platforms. I am a bit lured in by Hybrid-A*, but in analyzing the DARPA results, I'm seeing that wow, that lattice planner thing is looking pretty good too. There have been a few more algorithms I'm looking at, but these two have stuck out to me as good core candidates with some extensions / optimizations that can be made for embedded platforms. I've known this implementation has been hanging around, and it seemed relevant to engage here.

Motivational questions:

Technical questions:

Moving into the future, I would like to remove costmap_2d and replace it with some variation on grid maps and elevation mapping. I have an eye toward implementing planners / controllers that I can translate from 2.5D into 3D(ish) with traversability maps. I think this is a fine enough candidate, though I haven't looked at the SBPL internal library at length to know if the "costs" per cell have a specific meaning.

While I’d enjoy the challenge of writing this myself, having something that someone can vouch for as a starting point is a big force multiplier to get something out the door.

mintar commented 4 years ago

Hi Steve,

thanks for reaching out and sorry for the late reply!

I'm a maintainer on the ROS2 Nav stack and I'm in the middle of a trade study to remove naive grid search as the default planner (it has a place and time, and will continue to be maintained, but hopefully not as widely used).

+100 to that. My use case at the moment is a MiR100 diff-drive robot navigating in an office environment. It is rather large for an indoor robot, with a width just below typical door width and a length greater than that. Additionally, we use it as a base for a mobile manipulator, so we need to go to goal poses that are very close to obstacles (such as tables with objects to be grasped), which requires a lot of "parallel parking". In this context, SBPL has been invaluable, and grid search failed completely.

I've released the mir_robot package into Kinetic + Melodic, and I encourage you to play around with it yourself to get a feel for SBPL. Simply apt install ros-melodic-mir-robot and look at the "Gazebo demo (existing map)" section in the README:

https://github.com/dfki-ric/mir_robot

Some SBPL features that I've used here are:

You'll notice that SBPL is quite slow (more on that later). One reason is that I've set the allocated_time and initial_epsilon parameters to a rather large value. If you reduce them, SBPL is much faster, but then it will fail to find a path on very large (over a hundred meters square) maps. You can play with it yourself:

diff --git i/mir_navigation/config/sbpl_global_params.yaml w/mir_navigation/config/sbpl_global_params.yaml
index 259965f..8fdc0b4 100644
--- i/mir_navigation/config/sbpl_global_params.yaml
+++ w/mir_navigation/config/sbpl_global_params.yaml
@@ -2,8 +2,8 @@ base_global_planner: SBPLLatticePlanner
 SBPLLatticePlanner:
   environment_type: XYThetaLattice
   planner_type: ARAPlanner
-  allocated_time: 30.0
-  initial_epsilon: 50.0
+  allocated_time: 10.0
+  initial_epsilon: 15.0
   forward_search: false
   nominalvel_mpersecs: 0.8
   timetoturn45degsinplace_secs: 1.31 # = 0.6 rad/s

On this journey, my requirements are to support a larger range of arbitrary vehicles. Diff drive and omni have been reasonably supported, but support for Ackermann-style robots is lacking. There are undoubtedly other strange options as well, but those are my 3 motivating classes. Moreover, I want to support a wider class of irregular base shapes (i.e., completely remove any "circular"-related assumptions).

Those are all good reasons. FWIW, I've used SBPL to transport an 8.4-meter-long work piece on top of a MiR100 robot. Not a good idea for obvious reasons, but SBPL worked great. This is about as far from a circular footprint as you can get. Also, we've used SBPL successfully on a 46-ton Ackermann-style sugar beet harvester.

I've been exploring what the functional state of the art is for unconstrained autonomous driving applications -- knowing that if it supports Ackermann, I can create a simplified motion set for differential drive and omnidirectional platforms. I am a bit lured in by Hybrid-A*, but in analyzing the DARPA results, I'm seeing that wow, that lattice planner thing is looking pretty good too. There have been a few more algorithms I'm looking at, but these two have stuck out to me as good core candidates with some extensions / optimizations that can be made for embedded platforms. I've known this implementation has been hanging around, and it seemed relevant to engage here.

Motivational questions:

  • Would you say that SBPL lattice planner is ready for "prime time"?
  • Anything you think I should know or caution for/against before I start?

I think SBPL is quite stable and reliable; however, it is missing some features. I know that several companies are using it for their commercial robots:

However, what happened in both cases is that they found the open-source version of SBPL missing some critical features, so they made a proprietary fork of it where they implemented them. I think the main missing features are:

  1. SBPL is slow. Some ways to tackle this:
    • In intralogistics applications, the robot often plans the same path between the same start and goal poses over and over again. Intelligent caching helps a lot.
    • Adaptive management of the initial_epsilon parameter (see above).
    • SBPL often spends > 90% of the search time on the final approach; people have dropped hints that there are ways to optimize this.

I've heard that people achieve 200ms average planning times using all those optimizations, but I don't know if that number includes caching, so take it with a grain of salt. The upshot is that SBPL can be made really fast, although the current open-source implementation isn't. For my applications, it doesn't bother me that much; however, it precludes constant periodic replanning during execution (which is nice to have).

  2. Sometimes SBPL produces slightly awkward paths:
    • If no combination of the curved motion primitives ends up in exactly the grid row where the target pose is, SBPL will produce a small unnecessary zigzag that could be smoothed out: [image: sbpl_zigzag]

However, these problems are rather cosmetic and could easily be filtered out by a post-processing step.
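To make the initial_epsilon / anytime behavior above concrete, here's a toy sketch of the idea as weighted A* on a plain 4-connected grid. This is not SBPL's actual API, and real ARA* reuses search effort between epsilon iterations instead of restarting from scratch; the grid representation and epsilon schedule here are my own assumptions for illustration:

```python
import heapq

def weighted_astar(grid, start, goal, epsilon):
    """Weighted A* on a 4-connected grid of 0 (free) / 1 (blocked) cells.
    epsilon > 1 inflates the heuristic: the search is faster, but the
    path may be up to epsilon times longer than optimal."""
    def h(p):  # Manhattan distance, admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_list = [(epsilon * h(start), 0, start)]
    g = {start: 0}
    while open_list:
        _, cost, node = heapq.heappop(open_list)
        if node == goal:
            return cost
        if cost > g.get(node, float("inf")):
            continue  # stale queue entry
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0):
                new_g = cost + 1
                if new_g < g.get(nxt, float("inf")):
                    g[nxt] = new_g
                    heapq.heappush(open_list,
                                   (new_g + epsilon * h(nxt), new_g, nxt))
    return None  # no path found

def anytime_plan(grid, start, goal, epsilons=(15.0, 5.0, 1.0)):
    """Anytime flavor: start greedy (epsilon = 15, like initial_epsilon
    above), then tighten toward 1.0, keeping the best result so far.
    A real planner would stop when the allocated_time budget runs out."""
    best = None
    for eps in epsilons:
        cost = weighted_astar(grid, start, goal, eps)
        if cost is not None and (best is None or cost < best):
            best = cost
    return best
```

The point of the epsilon schedule is that a highly inflated heuristic finds *some* path almost immediately, and remaining budget is spent tightening it toward optimal.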

  • It seems to me hybrid-A* has received a lot more love from the autonomous driving community (outside of constrained environments like roadways).

I have no experience at all with Hybrid-A*, so I cannot compare it to SBPL, but it also looks promising. FYI, there's an implementation here that has some nice videos, but I haven't tried it out yet.

  • Is it worth looking into 4D space to include a velocity component as proposed by CMU’s DUC entry?

Sorry, I don't know. I haven't missed it so far, since I haven't tried to control autonomous vehicles at higher speeds yet.

Technical questions:

  • I see SBPL is basically dead. Is this because it functionally just works or has it been abandoned by authors and maintainers? Do you see that being a problem in say 1-5 years?

It's already a problem now. The code does indeed "just work", but as I wrote above there are tons of small improvements that can be made which will improve the speed by several orders of magnitude when combined. I know for a fact that @corot, who was working at Magazino at the time, planned on pushing a lot of Magazino's proprietary improvements upstream, but gave up after his first pull request was ignored.

  • I did a quick glance through SBPL. If we're trying to support our uses specifically, I feel that it wouldn't be an insurmountable task to rewrite the parts we need in a simpler form -- is this something you have feelings about?

I don't have any deep experience with the SBPL code, but @corot has. If you have any specific questions, I'm sure he can help.

  • I see from the video demo that there are pretty sharp corners when moving between straight-line sections. This makes me look back at the paper I linked to above, look at Figure IV, and wonder: "to have a regular lattice, it must follow 360/(2*N), therefore limiting the resolution". Is that true for this implementation? If so, what is N (or is that in the configuration script)?

I think 2*N == numberofangles here:

https://github.com/sbpl/sbpl/blob/4d654845aa3b92aa1b4282a342e9011bac95aeb9/matlab/mprim/genmprim_unicycle.m#L45

FYI, the numberofangles = 16 here results in the 22.5° discretization I mentioned above. Obviously, the more motion primitives you have, the better the discretization, but also the higher the search time. In my mir_robot repo, I currently use 112 motion primitives.
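As a quick sanity check on those numbers (just arithmetic, nothing SBPL-specific):

```python
def angular_resolution_deg(numberofangles):
    # The lattice divides the full circle evenly among the angles
    return 360.0 / numberofangles

print(angular_resolution_deg(16))  # 22.5 degrees, as above
print(angular_resolution_deg(32))  # 11.25 degrees would need 32 angles

# 7 primitives per angle times 16 angles gives the mir_robot count
print(7 * 16)  # 112
```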

  • I haven't looked at the Matlab script - any knowledge about how challenging it would be for an intern to translate it to Python and make a nice GUI around it?

I also hate the Matlab scripts and think they should be replaced. FYI, they run just fine on GNU Octave, so you don't need a Matlab license. Converting them to Python shouldn't be too hard. In fact, somebody already started this a couple of years ago, but I think they didn't quite finish it:

https://github.com/ros-planning/navigation_experimental/compare/kinetic-devel...mintar:CNURobotics-python_prim

Moving into the future, I would like to remove costmap_2d and replace it with some variation on grid maps and elevation mapping. I have an eye toward implementing planners / controllers that I can translate from 2.5D into 3D(ish) with traversability maps. I think this is a fine enough candidate, though I haven't looked at the SBPL internal library at length to know if the "costs" per cell have a specific meaning.

While I’d enjoy the challenge of writing this myself, having something that someone can vouch for as a starting point is a big force multiplier to get something out the door.

I believe SBPL is an excellent starting point. @corot : anything to add?

Also, maybe you should have a look at orunav_motion_planner. It's a complete rewrite of a lattice-based motion planner. It's actively maintained, stable (in use for several years), and I've seen some impressive live demos of it on autonomous forklifts. If you're interested, you should talk to @h72a65.

corot commented 4 years ago

Hi! Thanks both for this interesting analysis. I think @mintar explained it all crystal clear. This is another sad case of an interesting project dying from lack of maintenance. A couple of remarks:

Btw, @SteveMacenski, great job with Nav2! Great that you adopted BTs. Next is to convince the Groot guys to port it to ROS2!

SteveMacenski commented 4 years ago

If you reduce them, SBPL is much faster, but then it will fail to find a path on very large (over a hundred meters square) maps.

That's not a very encouraging sign from my side. Some of the motivating problems I think about are how we work in very large (300,000+ sq. ft.) and dynamic (people, other robots, changes to the environment) spaces. It would obviously not be awesome if a user failed to compute paths. It's very common for robots to be in 30,000 m2 spaces; I consider that to be the target size, with an eye toward 80,000 m2 in the future with effort.

Also, we've used SBPL successfully on a 46-ton Ackermann-style sugar beet harvester.

That's encouraging though :-)

Intelligent caching helps a lot.

That is a "future goal" type of thing in my mind. I'd like to try as hard as I can to get things working such that it isn't necessary, but it would be nice. That kind of breaks a bit of my dynamic environment thesis above.

Adaptive management of the initial_epsilon parameter (see above).

I'll have to look into that.

SBPL often spends > 90% of the search time on the final approach; people have dropped hints that there are ways to optimize this.

I hope "people" will speak up :wink:. I know that for Hybrid-A* they have 2 sets of heuristics for expansion: L2 distance and a cached look-up table of "on approach" trajectories, to help with exactly this problem. They claim it gave them an order-of-magnitude speed-up, but it does seem like a pain to implement. I'd love to hear other thoughts or methods. Can you give me a feeling for how much time it takes you to compute paths of a certain length? (Nothing too scientific, just orders of magnitude.) 200ms for a poorly defined "long path" is kind of the middle ground of what I would accept. I was targeting 100-300ms.
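For what it's worth, the Hybrid-A* trick is usually described as taking the max of two admissible heuristics; a rough sketch of the idea follows. The approach_table dict keyed on the pose relative to the goal is hypothetical here; in the real planner it's a precomputed table of non-holonomic path costs:

```python
import math

def combined_heuristic(pose, goal, approach_table):
    """max() of two admissible heuristics is still admissible, and near
    the goal the cached approach cost dominates the straight-line term,
    which is exactly where plain L2 badly underestimates the true cost."""
    l2 = math.hypot(goal[0] - pose[0], goal[1] - pose[1])
    # Hypothetical lookup: relative (dx, dy) rounded to the table's resolution
    key = (round(goal[0] - pose[0], 1), round(goal[1] - pose[1], 1))
    cached = approach_table.get(key, 0.0)
    return max(l2, cached)
```

The pain mintar alludes to is building and indexing that table, not combining the two numbers, which is trivial.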

The zig-zag and 22.5 degree problems are things I've thought about, and they're among my major concerns for Lattice. I alluded to some formulation from my understanding of the literature that this aligns with. Just as a ballpark, I'd say that to make this the recommended planner, I'd have to take the resolution at least another division down, to 11.25 degrees. 22.5 degrees would be insufficient for logistics, I imagine. 11.25 is in the same range as the default orientation tolerance parameters in the nav stack. Still in that 100-300ms timeframe, ideally. The idea being that the orientation divisions are fine enough that people functionally don't need to worry about them or even know about them.

Hybrid-A* also contains a CG optimization step to smooth out the path. I'm thinking now that this is probably a step I should turn into a general ROS solution, because it's come up a few times with SBPL/OMPL/etc. That doesn't come "free" computationally either.

Your implementation "[here]" didn't have a hyperlink; can you share that again? I've seen various Hybrid-A* implementations floating around (very few are properly implemented), but I'm curious if you stumbled on one I didn't.

Sorry, I don't know. I haven't missed it so far, since I haven't tried to control autonomous vehicles at higher speeds yet.

I think that tells me enough if you don't feel the need for it with a 46-ton robot and a "high-speed" industrial robot.

I don't suppose you know if Magazino would be open to making their SBPL fork public? They have done a lot of great work in open source already, and it actually surprises me a little that they did a bunch of work there and didn't make it public, even if they didn't merge it back in. Having a company like them backing this approach and the optimizations to make it tractable (and I'd add more, I'm sure) would be a very motivating use case. I'm in the process of switching from DWB to TEB for a similar reason (it has far more industrial users and is clearly working for them).

Would you consider 112 primitives "a lot"? It seems like a lot. I'm a little surprised to hear it's > 50.

I also hate the Matlab scripts and think they should be replaced.

I'm just hoping to get an intern to do it. Frankly, the Matlab script is a big, big detractor. I don't want to support or maintain it. I don't know why it requires Matlab, but that seems shady (and researchy, partially why I questioned this package to begin with). It sends the wrong signals. Though I'd really need someone else to do that; I think I'd pull my hair out if I had to do it myself.

Making the grid coarser speeds things a lot w/o affecting much subsequent search-based planning.

How fine were people searching? I'm imagining projecting the robot forward, let's say, 100ms each iteration. At 0.5 m/s that's 5cm, which does seem pretty dense; I wouldn't do that in practice. I'd instinctively bump that up to 10cm for smaller robots (turtlebots, small service robots) and probably 20-30cm for larger ones (industrial / logistics robots).

Any other words of advice for speed ups?

Btw, @SteveMacenski, great job with Nav2!

Thanks! In my mind, we got to a good point with the infrastructure; now it's time to upgrade the algorithms. In 2 years' time, I don't even want to hear the words "dwb", "navfn", or "costmap_2d". Let's get out of technology circa 2010. Not that Hybrid-A* and Lattice are all that much newer, but at least they're still used in practice by pretty serious people.

mintar commented 4 years ago

It's very common for robots to be in 30,000 m2 spaces; I consider that to be the target size, with an eye toward 80,000 m2 in the future with effort.

Absolutely. On the MiR robot, I've used maps up to 25,000 m^2 with SBPL myself. Planning from one corner of the map to another takes a while, but it works.

Intelligent caching helps a lot.

That is a "future goal" type of thing in my mind. I'd like to try as hard as I can to get things working such that it isn't necessary, but it would be nice. That kind of breaks a bit of my dynamic environment thesis above.

Sure, you need to check whether the cached path is still valid (due to dynamic obstacles). But I believe even having an invalid path from start to end helps a lot (it should be much faster to repair an invalid path than to plan from scratch).
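A minimal sketch of that re-validation step, assuming a plain 2D cost grid with costmap_2d-style cost values (the path representation as a list of cells is made up for the example):

```python
LETHAL = 254  # lethal obstacle cost, as in costmap_2d

def first_blocked_index(costmap, path_cells, threshold=LETHAL):
    """Re-validate a cached path against an updated costmap.
    Returns the index of the first blocked cell, or None if the whole
    path is still traversable; a repair could then start near that
    index instead of planning from scratch."""
    for i, (row, col) in enumerate(path_cells):
        if costmap[row][col] >= threshold:
            return i
    return None
```

The check is linear in path length, so it's cheap to run on every costmap update compared to a full replan.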

Can you give me a feeling for how much time it takes you to compute paths of a certain length (Nothing too scientific, just orders of magnitude). 200ms for a poorly defined "long path" is kind of the middle-ground of what I would accept. I was targeting 100-300ms.

I recorded a video for you yesterday, so you can judge for yourself (the video is sped up 4x, though):

https://vimeo.com/394184430

The two longer paths (at 0:21 and 1:08) are both 20-ish meters long, and both took around 10s to compute (on my 4-year-old laptop, with Gazebo eating up most of the CPU power). The shorter paths were quicker to compute. I used the following parameters for the video:

allocated_time: 10.0
initial_epsilon: 15.0

SBPL is an anytime algorithm; it seems that in both cases it aborted at the 10 second allocated_time limit and returned the best path before exhausting all possibilities. I didn't fine-tune the parameters; I probably could have gotten paths of comparable quality quicker by reducing those parameters further. However, I don't believe you will get 100-300ms with the current implementation.

@corot : In your experience, what planning time can be achieved by SBPL for a "reasonably long path" when you include all the optimizations? Preferably without caching, because that would distort the results.

The zig-zag and 22.5 degree problems are things I've thought about, and they're among my major concerns for Lattice. I alluded to some formulation from my understanding of the literature that this aligns with. Just as a ballpark, I'd say that to make this the recommended planner, I'd have to take the resolution at least another division down, to 11.25 degrees. 22.5 degrees would be insufficient for logistics, I imagine. 11.25 is in the same range as the default orientation tolerance parameters in the nav stack. Still in that 100-300ms timeframe, ideally. The idea being that the orientation divisions are fine enough that people functionally don't need to worry about them or even know about them.

The 22.5° steps bother me less than you'd think; you usually don't notice them unless you go along some really long corridors or across a huge free area. A hack that works for most buildings is to rotate the map such that all corridors go in the north-south or east-west direction. :)

Hybrid-A* also contains a CG optimization step to smooth out the path. I'm thinking now that this is probably a step I should turn into a general ROS solution, because it's come up a few times with SBPL/OMPL/etc. That doesn't come "free" computationally either.

It's definitely worth thinking about smoothing the paths. However, it's sometimes hard because many "kinks" in the path are there for a reason, and "cutting corners" leads to problems a few meters further along the path.

Your implementation "[here]" didn't have a hyperlink; can you share that again? I've seen various Hybrid-A* implementations floating around (very few are properly implemented), but I'm curious if you stumbled on one I didn't.

Ooops, sorry. I'll edit my comment above. Anyway, it's here: https://github.com/karlkurzer/path_planner

The implementation seems to be based on a master's thesis, so I wouldn't expect it to be extensively tested on many different robots. I just linked to it because I really liked the videos.

Sorry, I don't know. I haven't missed it so far, since I haven't tried to control autonomous vehicles at higher speeds yet.

I think that tells me enough if you don't feel the need for it with a 46-ton robot and a "high-speed" industrial robot.

Well, the MiR100's top speed is 1.5 m/s (5.4 km/h), so a brisk walking speed. You should probably get input from other people as well before dismissing the idea based on my experiences alone.

I don't suppose you know if Magazino would be open to making their SBPL fork public? They have done a lot of great work in open source already, and it actually surprises me a little that they did a bunch of work there and didn't make it public, even if they didn't merge it back in. Having a company like them backing this approach and the optimizations to make it tractable (and I'd add more, I'm sure) would be a very motivating use case.

Maybe the prospect of making it the default ROS2 planner would be an incentive for them to think about it again? @corot doesn't work there any more, but perhaps he can recommend somebody at Magazino to talk to.

I'm in the process of switching from DWB to TEB for a similar reason (it has far more industrial users and is clearly working for them).

TEB is definitely cool, but I've always had problems making it run fast enough on a 0.0025 - 0.05 m resolution map. How did you solve that?

I've just recently switched from Eband to DWB. With DWB and some custom critic plugins, I'm quite happy. The one thing I had to change is to split the path into segments (i.e., consider each point in the path where the robot switches driving direction as a separate goal); perhaps that's something to consider for ROS2 navigation in general? It makes the local planner's job much easier and prevents it from "cutting corners" at the turning points.

Would you consider 112 primitives "a lot"? It seems like a lot. I'm a little surprised to hear it's > 50.

I'd say it's normal. There are 7 primitives for each of the 16 angles (and 7 * 16 = 112), roughly:

  1. 0.05 m forward
  2. 0.40 m forward
  3. 0.05 m backward
  4. 0.40 m forward and turn 22.5° left
  5. 0.40 m forward and turn 22.5° right
  6. in-place rotation 22.5° left
  7. in-place rotation 22.5° right

These are the primitives (the number at the end is the final yaw angle, in units of 22.5°):

[image: sbpl_motion_primitives]

Maybe giving the total number of primitives as 112 was confusing; the branching factor of the search isn't 112, but 7 (since only those primitives that match the current yaw angle of the robot are applicable).
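A toy illustration of that point (the primitive names and table structure are made up, but the counting matches the description above):

```python
NUM_ANGLES = 16  # 22.5 degree yaw discretization

def build_primitive_table():
    """One entry of 7 primitives per discretized start angle; each
    primitive records the yaw index it ends at. During search, only the
    entry matching the robot's current yaw is expanded."""
    table = {}
    for angle in range(NUM_ANGLES):
        left = (angle + 1) % NUM_ANGLES
        right = (angle - 1) % NUM_ANGLES
        table[angle] = [
            ("short_forward", angle),
            ("long_forward", angle),
            ("short_backward", angle),
            ("forward_turn_left", left),
            ("forward_turn_right", right),
            ("turn_in_place_left", left),
            ("turn_in_place_right", right),
        ]
    return table

table = build_primitive_table()
print(sum(len(v) for v in table.values()))  # 112 primitives in total...
print(len(table[3]))                        # ...but only 7 per yaw angle
```

So the 112 figure describes the size of the primitive file, while the search itself only ever branches 7 ways from a given state.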

I also hate the Matlab scripts and think they should be replaced.

I'm just hoping to get an intern to do it. Frankly, the Matlab script is a big, big detractor. I don't want to support or maintain it. I don't know why it requires Matlab, but that seems shady (and researchy, partially why I questioned this package to begin with). It sends the wrong signals. Though I'd really need someone else to do that; I think I'd pull my hair out if I had to do it myself.

Don't worry about it. I've just added a Python version of the Matlab script that produces the mprim file I'm using and plots the graphic I've pasted above:

https://github.com/dfki-ric/mir_robot/blob/1b62c5191fb52df7f29db65f29de69fb40a05040/mir_navigation/mprim/genmprim_unicycle_highcost_5cm.py

Now it's just a question of making it less Matlabic and more Pythonic, and making it configurable via parameters.

Making the grid coarser speeds things a lot w/o affecting much subsequent search-based planning.

How fine were people searching? I'm imagining projecting the robot forward, let's say, 100ms each iteration. At 0.5 m/s that's 5cm, which does seem pretty dense; I wouldn't do that in practice. I'd instinctively bump that up to 10cm for smaller robots (turtlebots, small service robots) and probably 20-30cm for larger ones (industrial / logistics robots).

That will speed things up immensely, but it also means that you can't go through narrow passages if there is less than 10 cm on each side. The MiR uses 0.05 m map resolution internally for the global costmap + SBPL.

Any other words of advice for speed ups?

That's a question for @corot , I believe.

Btw, @SteveMacenski, great job with Nav2!

Thanks! In my mind, we got to a good point with the infrastructure; now it's time to upgrade the algorithms. In 2 years' time, I don't even want to hear the words "dwb", "navfn", or "costmap_2d". Let's get out of technology circa 2010. Not that Hybrid-A* and Lattice are all that much newer, but at least they're still used in practice by pretty serious people.

Totally agree, and congratulations on Nav2 from me too!

SteveMacenski commented 4 years ago

Planning from one corner of the map to another takes a while, but it works.

I guess that "a while" is something I need an order of magnitude on. I know that either way, this or Hybrid-A*, there's going to be work involved in optimizing the runtime speeds. That doesn't scare me too much as long as it's "reasonable", which is just as ill-defined as your "a while" :wink: You're also saying 10s at 5cm resolution. I wonder, if I bump that up to 10cm (oh, the insanity!), how much faster that is in practice. I would consider 1 second short enough to optimize down to <500ms, but 10s sounds a bit unrealistic for use in the nav stack for mobile robots.

A hack that works for most buildings is to rotate the map such that all corridors go in the north-south or east-west direction.

I think that's fine for an application, but less fine if that's needed for the necessary toolset.

Ah, I've seen that thesis project. It's one of the more complete ones, but the code isn't particularly... great.

perhaps he can recommend somebody at Magazino to talk to.

That would be appreciated.

TEB is definitely cool, but I've always had problems making it run fast enough on a 0.0025 - 0.05 m resolution map. How did you solve that?

So I haven't had an issue, and my usual default size is 0.05m. I haven't tried 2.5mm, but I think you mean 2.5cm ;-) The default builds also aren't in release mode; I was tricked by that the first few times I used it. If you actually build in release mode, it speeds up quite a lot (100x).

I will need to dissect the rest of that message next week.

I think the best course of action here is to get my hands dirty and see what comes out on the other side. I think that means creating a viable attempt at a hybrid-A* planner and also working on this lattice planner and seeing where I land.

SteveMacenski commented 4 years ago

Maybe giving the total number of primitives as 112 was confusing; the branching factor of the search isn't 112, but 7 (since only those primitives that match the current yaw angle of the robot are applicable).

To dissect that a little: so if you have 16 angles, then only the 7 primitives for that discretized angle are used (i.e., if at 45°, it uses those samples; at 22.5°, those samples; etc.)? If that is true, why isn't the above graph symmetric every 22.5 degrees? It looks to me like the N/W/E/S directions are forward with an elongated curve for a turn, the 45s turn faster, and the 22.5s are short and wide. That would seem to make it favor the map-oriented N/W/E/S directions over the other angles. What benefit does that derive?

Python

Oh awesome. With one example, I can do the rest, I'm sure. I was thinking about having a nice setup GUI like the MoveIt setup assistant: select your base type, select your angles, visualize it, OK? Save. Maybe even a few sample plans in a random space to get a feel for it. I'm less concerned with making it "Pythonic" than with having access to the Python tools (and not needing to mess around with Octave).

That will speed things up immensely, but it also means that you can't go through narrow passages if there is less than 10 cm on each side. The MiR uses 0.05 m map resolution internally for the global costmap + SBPL.

So it's 0.05m, but your primitives only have a couple of options at 0.05m; many of them are 0.40m. You mention that increasing to 10cm would be coarse, but you're functionally already sampling ~40% of your motions at 40cm. I think what I'm proposing is that you still have a 5cm costmap, but project motions at 10cm increments as the minimum for search instead of 5cm (and maybe still 40cm for the longer ones). I don't see a reason why the resolution of the costmap needs to be the same as the resolution of sampling. Skipping some cells should be OK if you're already skipping 8 of them for the 40cm primitives (and I'd hope with some interpolated collision checking in between... to make sure you don't clip obstacles?).
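The interpolated collision check at the end of that thought is straightforward to sketch. The straight-line primitive, list-of-lists costmap, and cost threshold are all simplifying assumptions here; real SBPL primitives carry their own precomputed intermediate poses, and a full check would also sweep the footprint, not just a point:

```python
import math

def collision_free(costmap, resolution, start, end, threshold=254):
    """Sample intermediate (x, y) points along a straight primitive at
    costmap resolution, so a 0.40 m motion on a 0.05 m map is checked
    at every cell it crosses rather than only at its endpoints."""
    dist = math.hypot(end[0] - start[0], end[1] - start[1])
    steps = max(1, math.ceil(dist / resolution))
    for i in range(steps + 1):
        t = i / steps
        x = start[0] + t * (end[0] - start[0])
        y = start[1] + t * (end[1] - start[1])
        row = int(round(y / resolution))
        col = int(round(x / resolution))
        if costmap[row][col] >= threshold:
            return False
    return True
```

This is the mechanism that lets the sampling increment be coarser than the costmap resolution without clipping obstacles in between.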

I think my plan will be to create a Hybrid-A* planner as I said above, and then circle back to this. I think there are reasons to use one or the other, and maybe it makes sense to have both available. Many of the optimizations that could be made for one would be relevant for both. I definitely hate redundant work, but this feels like a "I just need to get dirty with them both to make an informed decision" move. My main concerns with Lattice are the graph alignment (which an optimizer could help remove), the time to plan (which Hybrid-A* will likely also have), and the occasional failure to find plans that you mentioned before. I think all are solvable. Certainly, if Magazino were to open-source their updates and it speeds things up quite a bit, that would be a massive push for me to probably drop Hybrid-A* and move my efforts only to this, given some company backing.

mintar commented 4 years ago

To dissect that a little: so if you have 16 angles, then only the 7 primitives for that discretized angle are used (i.e., if at 45°, it uses those samples; at 22.5°, those samples; etc.)?

Correct.

If that is true, why isn't the above graph symmetric every 22.5 degrees? It looks to me like the N/W/E/S directions are forward with an elongated curve for a turn, the 45s turn faster, and the 22.5s are short and wide. That would seem to make it favor the map-oriented N/W/E/S directions over the other angles. What benefit does that derive?

It can't be symmetric, because you cannot rotate the motion primitives by an arbitrary angle. In order to form a lattice, the motion primitives must obey two conditions:

  1. The end position must be on a grid cell that's a multiple of the lattice resolution (here: 0.05).
  2. The end yaw must be of one of the 16 angles.

Maybe this figure makes it a bit clearer (I've added a better grid):

[image: sbpl_mprims]

The intermediate poses are then just interpolated between start and end according to the unicycle motion model. I believe these motion primitives were just the best approximation of a symmetrical arrangement under those conditions that they could come up with. IMO, the NW/NE/SW/SE directions look about the same length as the N/W/E/S directions. The reason why the NNW/NNE/... directions are shorter is probably because in order to get 22.5°, the "straight line" motions must be one of the following vectors times 0.05: (2, 1) / (4, 2) / (6, 3) / (8, 4), so they just picked (2, 1) for the short one and (6, 3) for the long one, and picked the curved primitives so that they roughly match in length:

figure_2-1

I don't know if this biases the search towards the other 8 directions (it probably does), but the final plans are not affected by these primitives being shorter.
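The length argument above is easy to verify numerically. This small snippet (illustrative only, not from SBPL) computes the metric lengths of the candidate "straight line" vectors listed above:

```python
import math

RESOLUTION = 0.05  # lattice resolution in meters, from the discussion above

# Candidate "straight line" vectors in grid cells and their metric lengths.
candidates = [(2, 1), (4, 2), (6, 3), (8, 4)]
lengths = {c: math.hypot(*c) * RESOLUTION for c in candidates}
for cells, length in lengths.items():
    print(f"{cells}: {length:.3f} m")
```

This gives ~0.112 m for (2, 1) and ~0.335 m for (6, 3), roughly matching the short and long straight primitives in the figure.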

Python

Oh, awesome. With one example I can do the rest, I'm sure. I was thinking about making a nice setup GUI like the MoveIt setup assistant: select your base type, select your angles, visualize it, OK? Save. Maybe even a few sample plans in a random space to get a feel for it. I'm less concerned with making it "pythonic" than with having access to the Python tools (and not needing to mess around with Octave).

Sounds great! I would have just gone with a simple command-line UI (stuff like --no-sideways, --backwards_cost_mult, etc.). Maybe I'll even implement that. But your tool would be much more awesome!

That will speed things up immensely, but it also means that you can't go through narrow passages if there is less than 10 cm on each side. The MiR uses 0.05 m map resolution internally for the global costmap + SBPL.

So it's 0.05 m, but your primitives only have a couple of options at 0.05 m; many of them are 0.40 m. You mention that increasing to 10 cm would be too coarse, but you're functionally already sampling ~40% of your motions at 40 cm. What I'm proposing is that you still have a 5 cm costmap, but project motions at 10 cm increments as the minimum for search instead of 5 cm (and maybe still 40 cm for the longer ones). I don't see a reason why the resolution of the costmap and the resolution of sampling need to be the same. Skipping some cells should be OK if you're already skipping 8 for the 40 cm primitives.

With the supplied motion primitives, it's still possible to reach any pose on a 5 cm grid (see above). It's already possible now to simply modify the motion primitives so that they all end up on a 10 cm grid, even though the costmap is still 5 cm; however, that would mean that between two possible parallel paths, there's a 10 cm gap, which might be too much for narrow passageways.

(and I hope maybe some interpolated collision checking between... to make sure you don't clip obstacles maybe?).

Yes, SBPL does that. The dots along the motion primitives in the plot above are intermediate poses. During initialization, SBPL collects a set of all costmap cells that the footprint intersects at any intermediate pose and subtracts the footprint at the motion's starting pose from that set (since it's already checked by the previous motion that got the robot there). During runtime, SBPL uses the set as a mask for collision checking.

I had to look it up myself, because the 0.05 m motions have the same number of intermediate poses as the long ones (in my case) and I was worried that this was causing extra overhead. However, since SBPL uses a set, it's not causing overhead during runtime.
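The mask-precomputation scheme described above could be sketched roughly as follows. This is not SBPL's actual code; the footprint is simplified to a circle and all names are made up for illustration.

```python
import math

RESOLUTION = 0.05  # costmap resolution in meters

def footprint_cells(pose, footprint_radius=0.3):
    """Hypothetical helper: costmap cells covered by a circular footprint at
    a pose. A real implementation would rasterize the polygon footprint."""
    x, y, _yaw = pose
    cx, cy = int(round(x / RESOLUTION)), int(round(y / RESOLUTION))
    r_cells = int(math.ceil(footprint_radius / RESOLUTION))
    cells = set()
    for i in range(cx - r_cells, cx + r_cells + 1):
        for j in range(cy - r_cells, cy + r_cells + 1):
            if math.hypot(i - cx, j - cy) * RESOLUTION <= footprint_radius:
                cells.add((i, j))
    return cells

def collision_mask(intermediate_poses):
    """Union of cells swept by the footprint at every intermediate pose,
    minus the cells at the start pose (already checked by the previous
    motion). Using a set collapses duplicate cells for free."""
    swept = set()
    for pose in intermediate_poses:
        swept |= footprint_cells(pose)
    return swept - footprint_cells(intermediate_poses[0])

poses = [(d, 0.0, 0.0) for d in (0.0, 0.1, 0.2, 0.3, 0.4)]
mask = collision_mask(poses)  # cells to check at runtime for this primitive
```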

I think my plan will be to create a Hybrid-A* planner as I said above, and then circle back to this. I think there are reasons to use one or the other, and maybe it makes sense to have both available. Many of the optimizations that could be made for one would be relevant for both. I definitely hate redundant work, but this feels like a "I just need to get dirty with them both to make an informed decision" move. My main concerns with the lattice planner are the graph alignment (which an optimizer could help remove), time to plan (which Hybrid-A* will likely also have), and the occasional inability, as you mentioned before, to find plans. I think all are solvable. Certainly if Magazino were to open-source their updates and it sped things up quite a bit, that would be a massive push for me to drop Hybrid-A* and move my efforts to this alone, if there's some company backing.

Whatever you do, as long as there's an alternative to naive grid search as the default planner for Nav2, I'm all for it!

If you decide to reimplement lattice-based planning, make sure to check out the orunav_motion_planner that I've mentioned before. It's another implementation of that same concept (but with support for multi-robot planning).

SteveMacenski commented 3 years ago

7 months later....

The new Hybrid-A* planner is now open for alpha testing: https://github.com/SteveMacenski/navigation2/tree/nav2_smac_planner. We have 2 more PRs until we're in beta (at which point release just needs testing / documentation / additional software optimizations).

So if you have a tractor (@droter), an RC car, a robot running at speed, or a car (@JWhitleyWork), you may find this interesting. Here are some recent tests from the Willow Garage world, planning to goals over 60 meters away across many rooms:

1. 4211 ms with 991,000 expansions
2. 2157 ms with 45,000 expansions
3. 202 ms with 55,000 expansions
4. 76 ms with 9,900 expansions
5. 314 ms with 81,000 expansions
SteveMacenski commented 3 years ago

Merging impending into Navigation2 https://github.com/ros-planning/navigation2/pull/2021

We've implemented a bunch of optimizations, so we're now running about 20x faster than the metrics above. Typical 85 meter complex-map paths are taking 30-60 ms. Very infrequently do we have anything that takes > 100 ms, let alone 1 second.

mintar commented 3 years ago

I already applied on the discourse thread, so I'm keeping it brief here. But let me say again: Good job!

SteveMacenski commented 3 years ago

@mintar @corot we're also looking at using the smac planner framework to implement a state lattice planner, which would have all these great speedups as well. Is this something you might be interested in being involved with? Having a robust, fast state lattice planner in ROS2 will be useful when your organization eventually has to transition to ROS2.

It would be remarkably little work, because the templated search algorithm requires no changes for this. It's only one Node template and the generation scripts.

mintar commented 3 years ago

I don't have any plans on doing that right now, but I'll keep it in mind. I think my use cases are already covered by the current smac_planner implementation. If I need faster state lattice planning in the future, I'll definitely rather work on adding it to the smac_planner than spending any more work on SBPL. Also, I'll point people your way if it comes up!

corot commented 3 years ago

Same here, I'm currently very busy, but I'll keep it in mind for the mid-term, in case nobody else takes on the task.

mkolodziejczyk-piap commented 3 years ago

I've shared my port to ROS2 in case somebody wants to test it. It's a bit rough now, but should work. I plan to add @mintar's Python script for motion primitive generation: https://github.com/mkolodziejczyk-piap/nav2_sbpl_lattice_planner/tree/eloquent-devel/nav2_sbpl_lattice_planner

ADI201998 commented 3 years ago
  1. SBPL is slow. Some ways to tackle this:
  • In intralogistics applications, the robot often plans the same path between the same start and goal poses over and over again. Intelligent caching helps a lot.
  • Adaptive management of the initial_epsilon parameter (see above).
  • SBPL often spends > 90% of the search time on the final approach; people have dropped hints that there are ways to optimize this.

I've heard that people achieve 200ms average planning times using all those optimizations, but I don't know if that number includes caching, so take it with a grain of salt.

Hello @mintar, I was wondering about the methods used to achieve this, as I too was looking into speeding up SBPL.

ADI201998 commented 3 years ago

This is another sad case of an interesting project dying of lack of maintenance. A couple of remarks:

  • SBPL (the library, not the lattice planner) code is, imho, rather terrible, and performance can be massively improved (see for example my PR mentioned above)
  • A lot of computation is spent calculating the heuristic for the search. Making the grid coarser speeds things up a lot without much affecting the subsequent search-based planning.

Hello @corot, I was looking into speeding up the planning part of SBPL and was wondering about any techniques to do the same. Thank you.

corot commented 3 years ago

I've shared my port to ROS2 in case somebody wants to test it. It's a bit rough now, but should work. I plan to add @mintar's Python script for motion primitive generation: https://github.com/mkolodziejczyk-piap/nav2_sbpl_lattice_planner/tree/eloquent-devel/nav2_sbpl_lattice_planner

Just curious: why port it instead of using the new Hybrid-A* planner?

corot commented 3 years ago

Hello @corot, I was looking into speeding up the planning part of SBPL and was wondering about any techniques to do the same. Thank you.

Probably some are already part of the new Hybrid-A* planner, but some things I tried are:

SteveMacenski commented 3 years ago

As an update, I will shortly push some changes that speed up collision checking of the full footprint significantly, making it only ~2-10% slower than checking only the center cost for a circular modeled footprint. There are some improvements to be made to further improve the performance of Hybrid-A*, but alas, my OpenMP skills are either rusty or parallel processing in this context isn't helpful. Either way, we're well under 1 s for any reasonable query at warehouse scale (looking at a ~30-400 ms typical range, depending on the complexity of the request). If anyone's a planning-optimization or OpenMP wizard, I'd thank thee. Any bit of improvement here impacts a very large group of users. If we get it fast enough, we can drop naive grid search entirely (and I don't think we're far from that).

I'm starting to think about how I want to support kinematically feasible planning for non-circular differential and omnidirectional robots in ROS2. Clearly both can use Hybrid-A*, but it won't make full use of the drivetrain's capabilities. One of the methods at the top of my list is still state lattice planners, but I'm still concerned about the ~112 branching factor from each pose. Any feedback on this would be great, as I'd like to start prototyping next week (state lattice, or something else if necessary) to include in Nav2 alongside Hybrid-A*. Re-reading this makes me think that maybe it's not actually 112 but 7 at each pose? That could make sense for differential drive, but what about omni? Wouldn't you have all of those options available at all times?

My knee-jerk idea was just an increased-primitives Hybrid-A* model; not very inspired, but I figured it might be faster. The idea being that rather than 3 motion primitives for Hybrid-A*, there are N, where N is sufficiently bounded (like 10-20), distributed evenly across the differential-drive / omnidirectional potential motion range, and search this way. There are some implementation logistics issues that this brings up, but they're solvable. This isn't very "formal," and I'd rather create a variant of a more tried algorithm, or just implement one verbatim if it's fast enough and does the job. It also sounds like this option isn't far off from what @mintar brought up in this comment, if I'm now reading it correctly (and before, I definitely misread).
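The "N evenly distributed primitives" idea above might be sketched like this. It is purely illustrative, assuming a simple arc motion model; the function name and parameters are made up.

```python
import math

def sample_primitives(n, max_curvature, step_length):
    """Return (dx, dy, dyaw) increments in the robot frame for n curvatures
    spaced evenly across [-max_curvature, +max_curvature], instead of the
    usual 3 (straight, max-left, max-right) used by Hybrid-A*."""
    prims = []
    for i in range(n):
        kappa = -max_curvature + 2.0 * max_curvature * i / (n - 1)
        dyaw = kappa * step_length
        if abs(kappa) < 1e-9:
            # straight segment
            dx, dy = step_length, 0.0
        else:
            # constant-curvature arc of radius 1/kappa
            dx = math.sin(dyaw) / kappa
            dy = (1.0 - math.cos(dyaw)) / kappa
        prims.append((dx, dy, dyaw))
    return prims

# e.g. 11 primitives for a platform limited to |kappa| <= 2.0 1/m,
# expanded in 0.2 m steps; the middle one is the straight motion
prims = sample_primitives(11, max_curvature=2.0, step_length=0.2)
```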


Maybe giving the total number of primitives as 112 was confusing; the branching factor of the search isn't 112, but 7 (since only those primitives that match the current yaw angle of the robot are applicable).

Maybe I missed this last time we talked -- so you're saying that 112 is all of the possible motions ever, but at any given search box, you only use the 7 that apply to your current orientation?! Oh boy, have I been thinking about this wrong for over a year. What if it's omnidirectional? Would all 112 be valid then? Would the state lattice graph it shows be much further reduced in the number of total primitives? What is the typical branching factor at a given pose for a state lattice planner? I was under the impression it was hundreds; if it's only tens, that changes my entire outlook on this algorithm :laughing:

I wouldn't have described "7 primitives per search step, just rotated into N orientation bins" this way; it would never have occurred to me in isolation that that's what this might mean. I suppose that's the danger of doing this all in open source: I leave myself plenty of opportunity to not only be wrong, but to be very publicly wrong -- whatever, I'd rather be getting things done than be overly concerned about my rightness :smiley:

mintar commented 3 years ago

so you're saying that the 112 is all of the possible motions ever, but at any given search box, you only use the 7 that apply to your current orientation?!

Yes, exactly. I think now you've got it. That'll teach me not to write walls of text in the future... :)

In this comment, I also posted a figure showing the 7 motion primitives for a single angle. It's a bit hard to see; every motion primitive has a number (representing the resulting yaw angle in 22.5° steps) at the end point. The two in-place rotations are unfortunately printed on top of each other at the current pose of the robot (the coordinate origin).

What if its omnidirectional? Would all 112 be valid then?

No, you'd never model it that way. In the SBPL MATLAB mprim generation scripts, they have different models:

  1. unicycle (5 prims per angle): slow forward, fast forward, slow backward, forward turn left, forward turn right
  2. unicycleplussideways (9 prims per angle): same as (1), plus: turn in place clockwise/counterclockwise, move sideways left/right
  3. unicycleplussidewaysplusbackturn (11 prims per angle): same as (2), plus: backward turn left/right
  4. unicycleplussidewaysplusbackturnplusdiag (13 prims per angle): same as (3), plus: forward diagonal left/right

Of course, you can model anything else that you want; in my omnidirectional robot, I'm using a variation on (1) plus "turn in place clockwise/counterclockwise".

The "sideways" and "diagonal" primitives maintain the yaw angle, so (2)-(4) only make sense for omnidirectional robots.

As you can see, the branching factor is at most 13, even for omnidirectional robots, depending on which model you want to use.
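The distinction between the total primitive count and the branching factor can be sketched in a few lines. This uses the earlier 16-angle, 7-primitives-per-angle example; names are illustrative, not SBPL's API.

```python
# The full primitive set is (num_angles x prims_per_angle), but the
# branching factor at any pose is only prims_per_angle, because only the
# primitives matching the current yaw bin are applicable.
NUM_ANGLES = 16
PRIMS_PER_ANGLE = 7  # the unicycle-style set discussed earlier

# primitives[angle_bin] -> motions starting at that discretized heading
primitives = {
    bin_id: [f"prim_{bin_id}_{k}" for k in range(PRIMS_PER_ANGLE)]
    for bin_id in range(NUM_ANGLES)
}

def successors(angle_bin):
    """Only the primitives for the robot's current yaw bin are expanded."""
    return primitives[angle_bin]

total = sum(len(p) for p in primitives.values())
print(total)                # 112 primitives in the full set
print(len(successors(3)))   # but branching factor is only 7
```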

mintar commented 3 years ago

One thing to add: the motion primitives above don't represent the full range of motions of an omnidirectional platform. For example, sideways motions while rotating at the same time are not included. But the point of using something like SBPL or Hybrid-A* is that you want to restrict the range of motions a bit; for example, maybe you want the robot to prefer forward motions and forward turns (by assigning them a low cost), because you've got a forward-pointing 3D sensor that you want to use for collision avoidance (like the PR2 has).

At the ROS Fall School in Munich in 2010 (the event that kicked off ROS in Europe), the original authors of SBPL showed a cool demo where the PR2 was pushing a cart that had shopping-cart-like kinematics. So the whole robot-cart system was not omnidirectional any more. I think their point was that Dijkstra works fine for the standalone PR2, but SBPL is useful for more complex kinematics.

SteveMacenski commented 3 years ago

Awesome, thanks for the info! I think we can re-close this issue -- I think I have a decent understanding of our path forward. I appreciate the insight and I'll comment back when we have something to play with!

berndpfrommer commented 3 years ago

I am building a robot with Ackermann dynamics and am looking for a ROS2 based navigation framework. I'm coming from the vision side (VIO/SLAM) but have no experience in planning/trajectory generation/control. My goal is to build a robot that can go from location A to location B without bumping into things, outdoors, if possible at high speed (20mph+). The map is not known beforehand but will be discovered as the robot proceeds. Is the ROS2 navigation stack suitable for this purpose, in particular for path planning and controlling an Ackermann vehicle? Would love to help test/debug if the basics are there. Thanks!

SteveMacenski commented 3 years ago

Another fun update on the state lattice progress: Matthew Booker (I believe his GitHub handle is @mattbooker) has been working on generating control sets via the more "official" way, with the minimum control set of primitives (rather than SBPL's more hand-tuned methods), for the Nav2 State Lattice Planner variant. I'm super happy with the results and figured I'd share them here.

I thought @mintar would also enjoy doing a head-to-head comparison of the lattice he's currently using against this one. There are many similarities, but also some additional primitives that SBPL misses as minimum-set options. We might want to prune the 0° and 45° sets a little, but I'm super happy with these results. He's able to generate these with any turning radius or grid cell resolution automatically! No other parameters required!

image

SteveMacenski commented 3 years ago

image

Pretty shapes!

mintar commented 3 years ago

Very nice, thanks for keeping us updated!

SteveMacenski commented 2 years ago

image

It's going well; I finally have a complete prototype together. As you can tell, it's a bit..... wobbly. I need to work out tuned gains / traversal functions / heuristic functions that make everything happy. That may take a hot second, but all the pieces are together now, and it's functional if you're OK with a drunken lad (> 10 Hz).

As you can imagine, it's a bit more complex, since each of our angular bins may have more or fewer than 3 primitives, and since they're based on minimum spanning trajectories, they vary in length significantly more than the SBPL hand-generated scripts (straights are ~0.25 m, light turns are ~0.4 m, heavy turns are ~0.6 m, different angles have different extremes of turns, etc.).

SteveMacenski commented 2 years ago

image

mintar commented 2 years ago

Nice, looks much less wobbly!

SteveMacenski commented 2 years ago

This will be the final update, the Nav2 State Lattice Planner will be merged very shortly. Documentation now available as well: https://navigation.ros.org/configuration/packages/smac/configuring-smac-lattice.html.

The performance is spectacular; it is in line with 2D A* from NavFn and Hybrid-A*, and is able to plan 75 m paths in under 150 ms in open areas (open spaces take a bit more compute). In more standard warehouse/aisle spaces, it typically takes under 30 ms. The worst performance I saw never exceeded 300 ms. I'm very happy overall with the outcome, and I hope this will be a nice forcing function to try out Nav2 or consider moving over to it!

berndpfrommer commented 2 years ago

I actually want to use Nav2 for an Ackermann-type vehicle, but I'm still working on the state estimation part. I'm keeping a keen eye on what you are doing!

corot commented 2 years ago

Amazing, great job! Definitely a big boost to Nav2!