carlosluis closed this pull request 4 years ago
Please separate this into 2 PRs: one that merges in master and the other that has your changes. This is far too large of a PR for me to review.
Sorry about that, I opened a new one just with the merge here.
Note that my change is really only the last commit; the other 82 are the merge from master.
Can you rebase so that there's only the new commit? Trying to separate out your changes from just general updates
Done, the commit is isolated now
I didn't observe any runtime impact on A* after making it templated on the node type. Here's a stat from a simulation with the TurtleBot3 (my CPU is pretty old, btw: Intel® Core™ i5-3210M CPU @ 2.50GHz × 4 with 8 GB of RAM). runtime_templated_astar.txt
Changes were tested using both 4 and 8 connect neighborhoods, no difference noted with the performance before the changes.
@carlosluisg those are far too short of paths to get an understanding of compute time from. I think you should use the willow map https://github.com/turtlebot/turtlebot_apps/blob/indigo/turtlebot_navigation/maps/ and plan across it. That will give you ~30,000 nodes which will be a better indicator of any additional overhead from the templates vs before. Can you run 10 plans from across it with each and compare (mean difference, std)?
You can load this map, localize in a corner of the big room and then plan to one of the sub-rooms across the map. You can still use the bt3 simulator (just nothing will match), just lock the robot down so it doesn't move (remap cmd_vel or something)
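The mean/std comparison asked for above boils down to two small statistics helpers. A minimal sketch (function names are illustrative, not part of the planner code) for aggregating the per-run timings:

```cpp
#include <cmath>
#include <numeric>
#include <vector>

// Mean of a set of planner runtime samples (milliseconds).
double mean_of(const std::vector<double>& xs) {
  return std::accumulate(xs.begin(), xs.end(), 0.0) / xs.size();
}

// Sample standard deviation, matching the "mean +/- std" reporting below.
double stddev_of(const std::vector<double>& xs) {
  const double m = mean_of(xs);
  double sq = 0.0;
  for (double x : xs) sq += (x - m) * (x - m);
  return std::sqrt(sq / (xs.size() - 1));
}
```

Running each planner variant 10-20 times and reporting `mean_of` +/- `stddev_of` of the runtimes gives a fair apples-to-apples comparison.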
Sorry this takes me a while to review; it's a pretty hard thing to just look at and it's requiring a bit of thought. This is looking good though.
No worries, thank you for reviewing it. I'll work on gathering more data with the map you pointed and bring back the results for discussion.
Responded to your comments
Here's a summary of the benchmark I ran:
In the Willow garage world I tested creating the following path:
Results (showing average +/- standard deviation runtime for 20 executions of the planner):
Description | Commit | Runtime (ms) | Nodes explored |
---|---|---|---|
A* Not templated | 4e60a95 | 164.61 +/- 9.22 | 67420 |
A* templated (GetNeighbors in Node class) | f876fff | 157.78 +/- 13.35 | 67420 |
A* templated (GetNeighbors in A* class) | 3d016da | 164.99 +/- 8.53 | 67420 |
Conclusion: Based on my observations, there's no regression introduced by the templated implementation. All variants lie within the same runtime range (considering the standard deviation) and none showed any clear superiority. There was a bit of variability depending on the computational load of my laptop at the time of running, so in reality I cannot say A* templated (GetNeighbors in Node class) was consistently faster than the other 2.
Awesome, no impact, that's what we're looking for :-) Let me review this again and we can figure out next steps
I have no issues left - anything else you want to change or mention before merging?
We have a couple of directions we can go to next:
If my RSS workshop gets cancelled, then I think this is what I'll start also working on again with you to get this ready for prime time
Nothing to add before merging - ship it!
As for next steps: I think a good approach would be to get a version of Hybrid A* running first (i.e., the first two bullet points you mention) and then work on the optimizations (last 3 bullet points). Both streams can be worked on in parallel as well.
For now I will focus on the 3D node and using it within A* to get the non-holonomic compliant paths.
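As a rough illustration of what "non-holonomic compliant" expansion could mean for the 3D node, here is a purely hypothetical sketch (names, primitives, and step sizes are assumptions, not the PR's design): successors are generated by propagating the (x, y, theta) pose along constant-curvature arcs, so every expansion is feasible for a car-like robot.

```cpp
#include <cmath>
#include <vector>

// Hypothetical 3D node coordinate: position plus heading.
struct Pose2D { double x, y, theta; };

// Apply a forward motion of arc_length with constant curvature.
Pose2D Propagate(const Pose2D& p, double curvature, double arc_length) {
  if (std::fabs(curvature) < 1e-9) {  // straight-line motion
    return {p.x + arc_length * std::cos(p.theta),
            p.y + arc_length * std::sin(p.theta), p.theta};
  }
  const double dtheta = curvature * arc_length;
  const double r = 1.0 / curvature;  // turning radius
  return {p.x + r * (std::sin(p.theta + dtheta) - std::sin(p.theta)),
          p.y - r * (std::cos(p.theta + dtheta) - std::cos(p.theta)),
          p.theta + dtheta};
}

// Successors of a node: straight, max left arc, max right arc (Dubins-style).
std::vector<Pose2D> Successors(const Pose2D& p, double max_curvature,
                               double step) {
  return {Propagate(p, 0.0, step),
          Propagate(p, max_curvature, step),
          Propagate(p, -max_curvature, step)};
}
```

With a motion model like this, `GetNeighbors` for the 3D node would return `Successors` of the current pose instead of the 4/8-connected grid cells.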
I think the object for downsampling the costmap would be a good starting point. Other Hybrid A* demos (https://github.com/karlkurzer/path_planner) just assume you have some silly-level downsampled map you're working with. For some cases, it may make sense to have the costmap be 1 m resolution or something when using Hybrid A*, but it could also make sense to have higher resolution of the underlying map if you use multiple planners and Hybrid A* is the only one that needs a coarse map to work with.
What I imagine this object being is a new class we add to the ROS plugin: we give it some costmap and the intended resolution, and it outputs a new costmap at the downsampled resolution that we then feed into Hybrid A*. It might be worth doing this first so that when we get to sampling on motion models / orientations, we have something tractably solvable.
One question is how to downsample: if we go from 5 cm resolution to 1 m resolution, then there's a 20x20 grid of cells in each cell of the downsampled map. Should we take max cost, average cost, min cost, or some other statistical derivative / anti-aliasing technique? Max is an intuitive starting point, but curious if you had thoughts.
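A minimal sketch of such a downsampler, assuming a plain row-major cost grid rather than the actual `costmap_2d` API (class and method names are illustrative), using the max-cost aggregation discussed above:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical downsampler sketch: each coarse output cell takes the max
// cost of the fine cells it covers (worst-case aggregation).
class CostmapDownsampler {
public:
  // factor: number of fine cells per coarse cell along each axis
  // (e.g. 5 cm -> 1 m resolution gives factor = 20).
  static std::vector<uint8_t> Downsample(const std::vector<uint8_t>& costmap,
                                         int width, int height, int factor) {
    const int out_w = (width + factor - 1) / factor;   // round up at the edges
    const int out_h = (height + factor - 1) / factor;
    std::vector<uint8_t> out(out_w * out_h, 0);
    for (int y = 0; y < height; ++y) {
      for (int x = 0; x < width; ++x) {
        uint8_t& cell = out[(y / factor) * out_w + (x / factor)];
        cell = std::max(cell, costmap[y * width + x]);
      }
    }
    return out;
  }
};
```

Swapping max for an average (or min) aggregation is a one-line change in the inner loop, which would make benchmarking the alternatives straightforward.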
I think it does make sense to work on the downsampling first. It will be easier to test now rather than later, when we have a more complex node and algorithm in place. I will start working on that, and it will be nice to understand the impact of downsampling in the current state of the code, so I can do a short benchmark like the one for this PR with different sampling settings.
As for the cost, max is indeed intuitive because it's equivalent to considering the worst-case scenario, and it should probably be our first approach. It could be a bit conservative in the case where a big cell is only partially high-cost (say, only 1-2 cells out of the smaller 20x20 grid have high cost), since we would then consider the whole big cell to be high cost. There's another option (albeit more complicated) that could bring the best of both worlds: we keep a coarse grid (e.g. 1 m) for planning and a fine grid (e.g. 5 cm) for costs. The idea is that when we do node expansion, we check that the motion primitive taking us from the current node to the neighbor, when projected onto the fine costmap, does not pass through any blocked / prohibited cell. This gives more fine-grained obstacle avoidance and also potentially more optimal paths.
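The coarse-plan / fine-check idea above could be sketched as follows (purely illustrative: the function name, sampling density, and lethal threshold are assumptions, and the fine map is a plain row-major grid rather than the real costmap type). The primitive connecting two coarse nodes is sampled at sub-cell spacing in the fine map and the expansion is rejected if any sampled fine cell is lethal:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Check a straight-line motion primitive (world coordinates, meters) against
// the fine-resolution costmap. Sampling at half the fine resolution ensures
// no fine cell along the segment is skipped.
bool PrimitiveIsCollisionFree(const std::vector<uint8_t>& fine_map,
                              int fine_w,
                              double x0, double y0, double x1, double y1,
                              double fine_resolution,
                              uint8_t lethal_cost = 254) {
  const double dist = std::hypot(x1 - x0, y1 - y0);
  const int steps =
      std::max(1, static_cast<int>(dist / (0.5 * fine_resolution)));
  for (int i = 0; i <= steps; ++i) {
    const double t = static_cast<double>(i) / steps;
    const int cx = static_cast<int>((x0 + t * (x1 - x0)) / fine_resolution);
    const int cy = static_cast<int>((y0 + t * (y1 - y0)) / fine_resolution);
    if (fine_map[cy * fine_w + cx] >= lethal_cost) return false;
  }
  return true;
}
```

For Hybrid A* the segment would be replaced by the actual arc primitive, but the sample-and-test structure stays the same.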
Different sampling rates would definitely help performance, but it's hard to benchmark until we have the other hybrid-node stuff done too. Though we could test different rates with the 2D A* that we have right now.
This class is also easily testable since it's a composed object. When I started the work, I wasn't too concerned with test coverage, but I am now. Not something you have to think about right now, but it's on my list to start with. Before we merge this into master, we need at least 60% coverage. I'll probably take care of most of it.
Basic Info

Description of contribution in a few bullet points

- The node is templated on a `Coordinate` type, but for Hybrid A* we would need to add the orientation of the cell represented by the node.
- Moved the `GetNeighbors` functionality to this templated node class.

Description of documentation updates required from your changes

Future work that may be required in bullet points

- The `GetNeighbors` method does not filter neighbors based on cost (as it did before). Also, it feels weird calling a node to get its neighbors by passing the graph. My plan is to try separating some functionality into a properly templated `Graph` class, instead of it being just a vector of Nodes, so that we can do [...]. I think this would help moving forward when we start using the Node with coordinates + orientation.
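The templated `Graph` idea could look roughly like this (a hypothetical sketch, not the actual PR code; all names and the 4-connected neighborhood are illustrative): the node is templated on its coordinate type, and neighbor lookup lives in a `Graph` class instead of being done by nodes holding a reference to a raw node vector.

```cpp
#include <functional>
#include <vector>

// Illustrative coordinate type; a Hybrid A* variant would add orientation.
struct Coordinates2D { int x; int y; };

// Node templated on its coordinate type.
template <typename CoordinateT>
struct Node {
  CoordinateT coords;
  float cost{0.0f};
};

// Graph owns the nodes and answers neighbor queries, instead of the node
// being handed the whole node vector.
template <typename NodeT>
class Graph {
public:
  Graph(int width, int height) : width_(width), nodes_(width * height) {}

  NodeT& At(int x, int y) { return nodes_[y * width_ + x]; }

  // 4-connected neighborhood; an 8-connected version would add diagonals.
  std::vector<std::reference_wrapper<NodeT>> GetNeighbors(int x, int y) {
    std::vector<std::reference_wrapper<NodeT>> out;
    const int dx[] = {1, -1, 0, 0};
    const int dy[] = {0, 0, 1, -1};
    const int height = static_cast<int>(nodes_.size()) / width_;
    for (int i = 0; i < 4; ++i) {
      const int nx = x + dx[i], ny = y + dy[i];
      if (nx >= 0 && nx < width_ && ny >= 0 && ny < height)
        out.push_back(At(nx, ny));
    }
    return out;
  }

private:
  int width_;
  std::vector<NodeT> nodes_;
};
```

With this split, cost-based filtering of neighbors could live either in `Graph::GetNeighbors` or in the A* expansion loop, and swapping in a coordinates-plus-orientation node only changes the template parameter.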