@rotu is this something you guys at Rover would be interested in working on? It would be both a good advertisement for your platform and show using navigation2 and robot_localization to accomplish some outdoor GPS-guided task (with or without a pre-generated map). At that point, write up a little Nav2 website page / tutorial about it?
I have a lot of toys lying around, but this is the antithesis of the toys I have (all my robots are differential-drive and indoor: no GPS, tons of lasers, and no map required...)
Some additional notes:
I am interested in this, preferably without SLAM initially. I do not have the hardware on hand for now (GPS or lidars), though I will in the future.
At the moment, I rely on `gazebo_plugins` to equip my robot with the required sensor suite. I do have a custom-built Ackermann-drive robot with a nice outdoor Gazebo world that I copied from `gazebo_models`.
I am determined to dive deep into this, but I'd better have someone who can collaborate with me. @SteveMacenski, maybe you could summarize the high-level architecture of classes/packages needed to realize a suitable integration of this with navigation2?
For completeness from a post on the Slack, the 3 demos I had in mind are as follows:
Using the RL navsat transform to create an outdoor GPS waypoint following demo, both with and without fusing in some other mapping methods (v-slam, lidar slam, localizers, etc.):

1. GPS only, open space: if the waypoints are dense, or there's a straight-line view from each waypoint, then a planner is largely unnecessary and SLAM wouldn't be much additional help, beyond additional robustness in positioning to GPS drift.
2. GPS + SLAM: for increased accuracy and having a map for a planner to work in if the waypoints are too sparse in a complex environment for the local planner to reliably navigate. If you're in a maze and the waypoints only define the end goal, you will probably need a planner to route through the maze to follow the GPS.
3. GPS + rolling costmaps: for regular-interval waypoints in potentially massive (1 km x 1 km) dynamic environments where it's not realistic to map, have the global costmap be rolling, with a size sufficient to capture the current and next goals, to plan within to get around large obstacles that the controller may not be able to reliably navigate around. The rolling global costmap then means you don't have to map this massive space or hold a full 1 km x 1 km costmap in memory.

Related links: https://docs.swiftnav.com/wiki/ROS_Integration_Guide, https://answers.ros.org/question/218137/using-robot_localization-with-amcl/
What might be your timing on hardware? This task is threefold: creating something that works doing this, documenting the setup and process to enable it, and demonstrating that it's working. It should be possible given the state of the stack now to do it; it's mostly about documenting it so users have it as a 'feature' and showing it work in the real world in those 3 primary situations (though you personally aren't required to do all of them, I'd just like those 3 to be documented at some point).
The only packages you need to use for this are navigation2 and robot_localization.
All 3 demos are on my list, though like I said, I'd like to move slowly and deliberately. Let's focus on the first demo for now.
> What might be your timing on hardware?
Say 2-3 months from now. I am quite confident that if we make things work in simulation, it will take little time to test on real hardware (I will have access to a ready-to-roll outdoor robot). As for making the demo public (videos, etc.), I will confirm with my affiliation, but I expect it will be possible.
> creating something that works doing this,
Do we have any diagram/scheme of the components that will be involved in the first demo?
I am aware that people do outdoor navigation by tricking `navigation` into using an empty map, then following a series of conversions (GPS raw latitude/longitude ==> UTM ==> x,y in the map frame), and finally setting and navigating to this x,y as a goal with the `navigate_to_pose` action. Is this the best we can do here ❓
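In code, the conversion chain I have in mind is roughly the following. This is a minimal sketch assuming the `utm` pip package and an ENU-aligned, unrotated map frame whose origin's UTM coordinate is known (e.g. taken from the robot's first GPS fix); all names and coordinates are illustrative.

```python
# Minimal sketch of lat/long -> UTM -> map-frame x,y. Assumes the map frame
# is ENU-aligned and its origin's UTM coordinate is known; values illustrative.
import utm

# UTM coordinate of the map frame origin, captured once at startup
# (for example, from the first GPS fix the robot receives).
origin_easting, origin_northing, _, _ = utm.from_latlon(38.161479, -122.454630)

def gps_to_map_xy(lat: float, lon: float):
    """Convert a GPS fix to an (x, y) goal in the map frame."""
    easting, northing, _, _ = utm.from_latlon(lat, lon)
    # With an unrotated, ENU-aligned map frame, the map coordinates are
    # just offsets from the origin's UTM coordinate.
    return easting - origin_easting, northing - origin_northing

x, y = gps_to_map_xy(38.161500, -122.454600)
print(f"goal in map frame: x={x:.2f} m, y={y:.2f} m")
```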
Please correct me if I am mistaken: with the current navigation2, we will at least need to include a lifecycle node that reads the waypoints, does the above conversions, and finally executes them one by one.
> It should be possible given the state of the stack now to do it

Almost. I think at the moment we need the above process, but I agree there isn't much lacking for navigation2 to do this.
After we have the demos in hand, documenting isn't that hard. Though, document it where? navigation.ros.org?
Got it - yes, document it on navigation.ros.org.
If you were running SLAM, you'd have SLAM publishing an odom->map transform that nav uses for positioning. If you had localization with AMCL, the same thing. Basically, all you need to do is replace the TF publisher for that transform from a "lidar thing" with a "GPS thing".
That "gps thing" is robot localization. It has something called the navsat transform that takes in IMU / GPS data and provides a map-framed pose. Then with an instance of the robot_loclaization EKF, we smooth that and publish the appropriate transform. See 'dual navsat ekf' config files in robot_localization for examples of that (1 EKF for odometry, 1 EKF for GPS smoothing, 1 NavSat for processing GPS fixes).
Poof, things should work. Then the question with costmap_2d is how to size it. Typically, lidar SLAM gives you a static map, which sets the size of the map for use in costmap_2d. Now that you don't have a static map setting that size, you have to use the width / height / rolling parameters (depending on which demo we're doing at the time) to set the size of the space for the robot to operate in / roll around in. The rest of things work exactly the same: the obstacle/voxel layer processes obstacles in the scene into the costmap for use in planning / control to get you to your goals, and the inflation layer inflates. Just now there's no static layer in the no-SLAM case, so you're navigating based solely on the measurements of the environment.
I configured `robot_localization` for the simulated robot with GPS (not that perfect if the robot is climbing a hill), using the dual-EKF `navsat_transform_node` setup. Also see this PR in robot_localization: https://github.com/cra-ros-pkg/robot_localization/pull/611. So I have the holy chain of map -> odom -> base_link now.
My rough thoughts on costmaps:

For the global_costmap, set the `width`, `height`, and `resolution` to some very large values; after that, I think we should not have to worry about the global_costmap.

For the local_costmap, something like (40, 20, 0.2) respectively for `width`, `height`, and `resolution`, with `rolling_window` enabled, should be good enough.
Should this local_costmap be in the `odom` or `map` frame? robot_localization suggests that we execute local path plans and motions in the `odom` frame, as that is the continuous transform. But I think I need to hear more on this to pick the right frame for the costmaps.
Well, you don't want it to be something hilariously large, because then you might have to update that hilariously large thing or put a massive strain on your memory. Since the static map essentially sizes your costmap for your application, this is now a designer parameter for you to set based on your application's needs.
If, for instance, your application has waypoints that are 10 cm from each other, then your costmap really doesn't need to be much larger than your local costmap, just big enough to get around obstacles in the way. Maybe 10x10 meters, but that's just a guess. The aims of your local costmap haven't changed, so I'd keep that around the same size / resolution as you typically would for a comparable SLAM application.
Local costmap should be in odom. The real question here is if the global costmap should be in map or odom frame and whether it should be rolling. It all depends on your waypoint distance and needs. Can you highlight for us what you're actually trying to do that falls in line with this demo? See above for my 3 canonical examples, do any of those describe your aim (and if not, in those terms, what is your aim)?
I have started from zero to integrate and adapt `robot_localization`, `navigation2`, and the robot model itself, so it took quite some time. But now I have all 3 components doing what they are supposed to do, and all the essentials are set up to work together. As of this writing, I am looking deeply into clarifying the costmaps, their sizes, and their frames, and I will update here with my findings on what parameters work best for the first use case (navigating dense GPS points). I think 10 cm would be too dense, so I am thinking of 1 m between waypoints.
> The real question here is if the global costmap should be in map or odom frame

My current thought is that it had better be in map, as GPS doesn't accumulate the wheel slips/shifts that cause odom to drift over time.
> Can you highlight for us what you're actually trying to do

The task I am trying to do here is make the robot follow a given set of GPS waypoints in an outdoor field. The robot is aimed at farming tasks such as plant treatment with UV light or grass mowing, etc. The routes will be given in the form of GPS waypoints and will be drawn by a human. This corresponds to the first use case, I think. But this is just the initial task; once we have basic functions like GPS waypoint following, the other use cases will be on the list.
Edit: I did an initial waypoint test, though it isn't GPS waypoints yet; I just set the waypoints with the usual RViz plugin to check the response. Very short video here. I will write a node to take in GPS waypoints, convert them to the map frame, and call `FollowWaypoints` to do the same test.
A few other things:

- I had to keep the global costmap in the `odom` frame as well, as the `map` frame is shaky and discrete; I still need to play with the `robot_localization` parameters.
- Global costmap: in the test I set the size quite large but with a poor resolution (1000, 1000, 2 -> height, width, resolution) so that it (hopefully) didn't occupy large chunks of memory, and with its layers disabled. The reason is that I would like the reachability of the robot to be large for a given GPS waypoint.
- Local costmap: for obstacles and local paths I rely on meaningful sizes for the local costmap. Right now they are (10, 10, 0.1 -> height, width, resolution). But like you said, these params depend on the density and distance requirements of the GPS waypoints for the use cases.
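In parameter form, this is roughly what I'm running; a sketch in nav2_costmap_2d terms with only the size/frame parameters shown, everything else omitted:

```yaml
global_costmap:
  global_costmap:
    ros__parameters:
      global_frame: odom    # kept in odom for now; map is the other candidate
      rolling_window: true
      width: 1000           # large reach for far-away GPS waypoints...
      height: 1000
      resolution: 2.0       # ...but coarse, to keep memory use sane
local_costmap:
  local_costmap:
    ros__parameters:
      global_frame: odom
      rolling_window: true
      width: 10             # meaningful sizes for obstacles / local paths
      height: 10
      resolution: 0.1
```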
Any questions there? It's good to hear your process, but just making sure you didn't expect some specific response :-)
What's wrong with the map frame? It can be a little jumpy; that's fine. 1 meter between points is also pretty dense, just not dense enough that you could send the waypoints directly to the controller as the path, such that there wouldn't largely be a need for a path planner.
In the simulation environment, GPS waypoint following is now working (with pure GPS points: `long, lat, alt`). It is on flat Gazebo worlds; I haven't tested on uneven, hilly worlds.
Thanks to https://github.com/cra-ros-pkg/robot_localization/blob/79162b2ac53a112c51d23859c499e8438cf9938e/src/navsat_transform.cpp#L114, I could easily transform GPS waypoints to the map frame and navigate through them.
ATM there is planning between waypoints. Waypoints are no less than 1 m apart.
However, the first use case mentions:

> Navigating dense GPS points (I don't know, lets say every 10 cm or ...

It's so dense that it won't require global planning, hence directly feeding the waypoints to the controller.
So maybe what I have done so far falls into the second use case that you mentioned above, as there is planning in between?
If that's the case, I could create denser GPS waypoints and call upon `FollowPath`, but I guess this will require the source of GPS waypoints to be somewhat continuous, as the controller will be quick to consume the waypoints and expect new ones. Any comment on how to deal with that? One thing on my mind is to take in all the waypoints, put them into a queue, and execute them sequentially.
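For reference, the node I have in mind is shaped roughly like the sketch below: read GPS waypoints from YAML, convert them to map-frame poses, and hand them all to Nav2's `FollowWaypoints` action. It assumes the `utm` package, that the map frame origin's UTM coordinate matches the navsat transform's datum, and illustrative file / coordinate values throughout.

```python
# Sketch of a gps_waypoint_follower: YAML GPS waypoints -> map-frame poses ->
# one FollowWaypoints goal. Frame/datum assumptions and names are illustrative.
import utm
import yaml
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped
from nav2_msgs.action import FollowWaypoints

class GpsWaypointFollower(Node):
    def __init__(self):
        super().__init__('gps_waypoint_follower')
        self.client = ActionClient(self, FollowWaypoints, 'follow_waypoints')

    def follow(self, yaml_path, origin_lat, origin_lon):
        # Map frame origin expressed in UTM (e.g. the datum used by the
        # navsat transform) -- an assumption for this sketch.
        origin_e, origin_n, _, _ = utm.from_latlon(origin_lat, origin_lon)
        with open(yaml_path) as f:
            raw = yaml.safe_load(f)  # {gps_waypoint0: [lat, long, alt], ...}
        goal = FollowWaypoints.Goal()
        for lat, lon, _alt in raw.values():
            e, n, _, _ = utm.from_latlon(lat, lon)
            pose = PoseStamped()
            pose.header.frame_id = 'map'
            pose.pose.position.x = e - origin_e
            pose.pose.position.y = n - origin_n
            pose.pose.orientation.w = 1.0
            goal.poses.append(pose)
        self.client.wait_for_server()
        return self.client.send_goal_async(goal)

def main():
    rclpy.init()
    node = GpsWaypointFollower()
    future = node.follow('gps_waypoints.yaml', 38.161479, -122.454630)
    rclpy.spin_until_future_complete(node, future)

if __name__ == '__main__':
    main()
```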
I updated some of the descriptions; I don't think I added enough / correct information, 100% my bad. I agree, demo 1 needs to be better defined in intent. I updated 2 and 3 to be better defined: 2 was meant to be a reasonably sized environment, so we can have a SLAM map augmenting GPS (demo 2 is, tl;dr, GPS + SLAM or GPS + map). 3 was meant to be so large that we can't possibly map it, nor can we have a fixed-size costmap (demo 3 is, tl;dr, GPS + rolling costmaps).
So I think, to close the loop on all the options, demo 1 would be GPS + fixed-size costmap and no SLAM (tl;dr GPS + fixed-size costmap: the most basic non-SLAM, non-large, known-size space demo). I don't know what I was trying to get across with the density of points; I think, separately from these 3 situations, there's a question of waypoint density.
As you can tell, those concepts of waypoint density and the situations involving rolling costmaps / SLAM in conjunction with GPS are totally decoupled and unrelated. Let's just ignore the density discussion for right now. The "dense" situation is also one where we want to follow a dense waypoint-defined route, not do waypoint following (stopping at each). That's an entirely different task that is probably actually a new planner plugin. I added that separate topic of GPS path following to the algorithms ticket https://github.com/ros-planning/navigation2/issues/1710. It is separate from this ticket, which is just GPS waypoint following.
Sorry for conflating those concepts. I don't think that was well fleshed out in my mind until now.
No worries; at the current progress, the definition of these 3 demos hasn't affected any efforts. The demos involving SLAM will come later on my side. At the moment I am aiming at navigation in the wild, where it doesn't help to do SLAM because of the size and character of the area I would like to cover. But there will be SLAM in the upcoming stages, after I have GPS waypoint following set up reliably. Right now it works as best effort.
If we leave the density of GPS points aside, I guess that what I have so far falls into the 3rd use case (because it's no SLAM + rolling costmaps + pure GPS waypoint following in the wild). Although there are minor issues, this use case (3rd) is mostly ready to be documented and demonstrated in the Gazebo simulation environment. The current 2 issues are:
1. `smac_planner`: I use it for planning (nice plans indeed 👍🏼), but in the current form my robot performs pretty badly when taking turns; it's control-related and can be addressed.
2. `robot_localization`: this has been hard to deal with. The issue is that the `map` frame sometimes explodes. It happens from time to time; I have gone through all the parameters, and so far I couldn't figure out why the odom frame would be so stable but not the map frame. This is the only reason I have not achieved reliable results yet.
Your odom probably only has IMU / odometry, and the odom frame per REP-105 should be smooth and continuous. The map frame transform has no such requirement, and you're integrating GPS measurements into it. If the GPS model has added noise, then you would see that happen.
To update here with progress...
I think I can say that GPS waypoint following (the 3rd case) is OK to be demonstrated now. You may have a look at this video.
In the demo there are 6-7 GPS coordinates (`lat, long, alt`), and they are rather sparse (3-6 meters between each). After `robot_localization` localizes the robot, a node named `gps_waypoint_follower` takes in GPS coordinates from YAML, for instance:

```yaml
# [lat, long, alt]
gps_waypoint0: [-2.263097140589225e-08, -3.362884290260419e-05, 0.6342230932787061]
gps_waypoint1: [-5.88445328989623e-08, -8.854168442669975e-05, 0.634228971786797]
gps_waypoint2: [-8.401593503376237e-08, -0.0001260507704191531, 0.6342366030439734]
gps_waypoint3: [-1.5119029095705861e-05, -0.00017085240798748728, 0.6343228798359632]
gps_waypoint4: [-5.5533304225340664e-05, -0.00022202050093127968, 0.6342789707705379]
gps_waypoint5: [-0.00011934284522280326, -0.00024864146666783363, 0.634503205306828]
```

Then the node converts them to the `map` frame and calls the `FollowWaypoints` action to execute all of them respectively.
Let me know if you want to see a different setup (e.g. more GPS waypoints, or denser) than the one in the video; I can try to create that. I extracted the GPS coordinates quite manually, which is why I didn't have too many waypoints to follow.
I did the same demo on another Gazebo outdoor world where the ground is uneven; it performs OK too, but it's painfully slow: the real-time factor of Gazebo drops because the area is too large and I have no dedicated computer for it ATM. As I said previously, I currently do not have hardware to test on, but I will in the near future. If you want this demo to be documented and the code to be included in nav2_tutorials, I can do that now.
The costmap we see is clearly the local rolling costmap (since the path planner works off of it) - just verifying that, when you do visualize the global costmap, it is also rolling, since it's not visualized here.
`gps_waypoint_follower` - is this just an analog of the Nav2 WP follower, but with GPS rather than cartesian coordinates (and loaded from file vs. an action)? Is there a reasonable way to homologate these (e.g. expose a GPS waypoint following action that converts and calls the normal one)? Is that conversion from GPS coordinates -> map coordinates something general we can use?
I think this meets all the core needs for Demo 3 - GPS only, unsized environment, planning between waypoints. Ideally for the formal documentation we could use a more complex environment if simulated (like a campus simulation area or add some buildings or something) but this is the core demo.
I agree this is sufficient to document / explain the configuration file changes / any mods or new packages added to Nav2 to handle GPS, even without hardware. We can use a simulated experiment to show it as a placeholder for a real hardware robot (or just keep the simulation if the environment is demonstrative enough). For the 'show time' screen capture, it would be good to have the global costmap showing so that users can see it rolling visually. I think this is good to go to start writing up the documentation! Awesome job :-)
An aside: are you using an open-sourced controller plugin? I see it backs up a little; I know TEB does that, but I'm just a little curious.
Edit: I shared this video on the Slack in the cool-things-to-share channel. I tried to tag you, but it looks like you're not on there under your name or a GitHub ID that I could find.
Actually, both costmaps are rolling-enabled. I set the `alpha` quite low for the global costmap visualization in RViz; that could be why it doesn't draw attention or isn't very visible, but if you stop the video at 2:00 you can see the edges of the global costmap.
> gps_waypoint_follower, is this just an analog to the Nav2 WP follower but with GPS rather than cartesians (and loaded from file

This node does not duplicate `nav2_waypoint_follower`; it makes use of `nav2_waypoint_follower`'s action server just as this does, but before that it performs the GPS coordinates -> map coordinates conversion.
In its current form it isn't really an `Action`, but I could put it into that form to increase its usability.
> I think this meets all the core needs for Demo 3 - GPS only, unsized environment, planning between waypoints. Ideally for the formal documentation we could use a more complex environment if simulated (like a campus simulation area or add some buildings or something) but this is the core demo.
Yes, it would be better to show it in an environment that has more visuals; I will take care of that and come up with a new video. Good to hear that this has met the requirements of demo 3. I will work on writing up the documentation, configurations, and steps to reproduce the results. One thing: the robot model I use here is custom-made and I have not open-sourced it yet. Eventually I will, but since it has the potential to be used in some academic work, that could come later than I finish the documentation. So there are 2 alternatives here for people to reproduce the results on their machines: I can add a similar sensor suite to a TurtleBot3 and use that for the demo, or record a bag file and have people use it to reproduce the same results. I think once I have a bag file, a scheme that shows the topics and their message types, and a demonstrative video of the results, the documentation will be quite clear about how people can make use of it.
For the 'show time" screen capture, it would be good to have the global costmap showing so that the users can see it rolling visually.
I will change the RViz settings and make sure the global costmap is clearly visible.
Yes, this is the TEB controller, for dealing with Ackermann kinematics. I guess DWB isn't suited for this kind of robot. I am not on Slack, but I would be happy to join if there is a link I could use :)
I'm not concerned about whether the robot model is open source. That doesn't impact the GPS following settings changes, so I think you're good to go there. Other people can set it up with their respective robots; no need for TB3.
Ah ok, https://join.slack.com/t/navigation2/shared_invite/zt-hu52lnnq-cKYjuhTY~sEMbZXL8p9tOw link to join.
DWB isn't currently set up to do Ackermann, but it could be. Really, the only thing missing is an Ackermann trajectory generator plugin. We have a standard trajectory generator that we use for omni / differential robots. Basically, we just need a different one that creates Ackermann velocity commands for the critics to score based on collision / path following / other metrics. All the other stuff is the same. Topic for another time, but it's in the DWB improvements ticket. If there's interest, that's a relatively easy project to get DWB working with your robot as well.
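To illustrate the idea only (this is not DWB's actual plugin API, which is a C++ interface; just the sampling concept, in a few lines of Python): an Ackermann-style generator differs from a diff-drive one mainly in that it only emits velocity commands whose turning radius respects the vehicle's minimum.

```python
# Illustrative sketch: an Ackermann trajectory generator would only sample
# (v, w) pairs whose turning radius v/w clears the vehicle's minimum radius.
def sample_ackermann_velocities(v_samples, w_samples, min_turning_radius):
    """Yield (v, w) commands feasible for an Ackermann platform."""
    for v in v_samples:
        for w in w_samples:
            if w == 0.0:                            # straight: always feasible
                yield v, w
            elif abs(v / w) >= min_turning_radius:  # radius v/w is wide enough
                yield v, w

cmds = list(sample_ackermann_velocities(
    v_samples=[0.5, 1.0], w_samples=[-0.5, 0.0, 0.5], min_turning_radius=1.5))
print(cmds)  # only commands with |v/w| >= 1.5 m (or w == 0) survive
```

Each surviving command would then be rolled out into a trajectory and scored by the critics exactly as for differential robots.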
Here is a better video, where the robot pays a visit to a gas station following a set of GPS waypoints. From time to time RViz lags in updating the sensory data, but the code works as expected and the robot executes all waypoints. I had to switch off some of the recovery servers (e.g. `spin`), as that recovery doesn't really suit this robot.
I am not clear on one thing, so it might be useful to ask: `gps_waypoint_follower`. I have put it in action form, called `FollowGPSWaypoints`; it is analogous to `FollowWaypoints`. It could be included in navigation2, but it will introduce robot_localization as a dependency; please do let me know more on this.

> DWB isn't currently setup to do ackermann, but it could be.

Currently, control is not at the forefront of development for the project, but it might be when we achieve some level of accuracy and want to improve the robot's behavior when taking sharp turns, or decrease that back-and-forth movement when it adjusts its orientation. Is there any predicted potential advantage of DWB over TEB, strictly speaking for Ackermann?
Hello, I'm working with a modified SummitXL robot running on ROS2 (Foxy). GPS localization and the basic robot motion control stuff are already working. So if you like, with your help, I could test the whole thing on a real robot in a real environment.
Hi @davidgrenner, I am currently working on the documentation for achieving GPS waypoint following. After I have that ready (next week, hopefully), I think it will be great for you to try this pipeline on your robot so we can test the response on a real system.
Should all the code be included in the tutorials repo?
Hopefully all the code (minus your custom robot) should live somewhere in a nav or related repository. On the environment / waypoint coordinates: tutorials is probably the place. The actual waypoint following logic I'd like to have in the main repo, as a `follow waypoints`-supported thing. Adding a dependency for 1 package on RL is totally OK.
> Is there any predicted potential advantage of that over TEB, strictly speaking for ackermann ?
It's a more tunable cost function: you can add new critics or weight them as you like in order to get the desired behavior. TEB works when it works, but there's very little introspect-ability or tuning for specific behaviors. DWB lets you create, remove, or add arbitrary critics to get the behaviors you're looking for. If TEB works fine for your needs, no need to reinvent the wheel, so stick with it; but if you need to tune it for specific behavior, you may find it very hard to tune or impossible to get some behaviors you may want. It's application- and developer-specific.
Thanks for volunteering @davidgrenner! Like @jediofgever said, we'll be ready for that soon, and we'd love to have a hardware video too (maybe you could also help document one of the other demos!)
@SteveMacenski thanks for the details on the DWB critics. For now, TEB works quite well for me (even better than I wished for), so I might revisit the controller part at a later time.
There are two PRs open now (referenced just above this comment): the first introduces the GPS waypoint follower `action` interface as a standalone package in navigation2, and the latter adds a demo node to navigation2_tutorials showing how to use the provided `action`.
There will be another PR in navigation.ros.org that briefly explains the code, components, and flow, and also touches a little on `robot_localization`'s mission in all of this.
Hello @SteveMacenski, @jediofgever, I am a roboticist working on a problem similar to the one discussed above. I am working on a rover that needs to follow GPS waypoints. I am planning on localizing the robot using the robot_localization package and the navsat transform node. I am going with dense waypoints (a meter apart at most), so I can go with pure GPS, with no SLAM and planning in the intermediate states.
I have a few questions, could you please clarify? Thanks a lot!
```yaml
# namespace of sources of data.
observation_sources: scan
```

Can a vector/array be included here to encompass multiple sources? What's the keyword for a camera?
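From the costmap examples I've seen, I believe multiple sources are given as a space-separated list, with camera data usually entering as a `PointCloud2` source, something like the sketch below (topic names illustrative), but I'd like to confirm:

```yaml
obstacle_layer:
  plugin: "nav2_costmap_2d::ObstacleLayer"
  observation_sources: scan camera      # space-separated list of source names
  scan:
    topic: /scan
    data_type: "LaserScan"
    marking: true
    clearing: true
  camera:
    topic: /camera/depth/points
    data_type: "PointCloud2"            # depth cameras publish point clouds
    marking: true
    clearing: true
```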
Hi @KSorte, unfortunately this was quite some time ago. My commits were added to the main branch of my navigation2 fork: https://github.com/jediofgever/navigation2
@jediofgever, Thanks a lot for the resource!