autorope / donkeycar

Open source hardware and software platform to build a small scale self driving car.
http://www.donkeycar.com
MIT License

Add an improved path recording and follow algorithm #1025

Open Ezward opened 2 years ago

Ezward commented 2 years ago

We've done work to incorporate RTK gps into the path follow template. See https://github.com/autorope/donkeycar/issues/991

I have the GPS following the path (I still need to do a better job on the PID, but it is clearly following the line), but it gets lost when it gets back to the start. That gives a little insight. The only device that does not get lost when it gets back to the start is the T265; I believe that is because the T265 has loop closure built into its algorithm. I think our algorithm is getting lost because we tend to collect some weird points at the end of a path recording session. This is a proposal to clean that up.

Our current path follow completely ignores heading; it only looks at cross-track error. That is not as good as using heading. Think of it this way: if the vehicle is at position (x, y) and off the line (so there is some error to be corrected), then there are three possible states:

- the vehicle is pointed away from the line, so if it continues on that heading it will get farther from the line and the error will increase;
- the vehicle is tracking exactly parallel to the line, so if it continues on that heading the error will not change and it will never achieve the line;
- the vehicle is pointed toward the line, so if it continues on that heading the error will decrease and it will achieve the line.

So even though the vehicle is at the same position with the exact same cross-track error, we would want to apply very different control signals in each of those states.
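To make the three states concrete, here is a minimal sketch (not donkeycar code, and the names are illustrative) where the path runs along the x-axis, so cross-track error is simply the y coordinate. The same error produces three very different error rates depending on heading:

```python
import math

# Sketch: at the same position, with the same cross-track error, the rate at
# which that error changes depends entirely on heading relative to the path.
# The path here is the x-axis, so cross-track error is just y.

def cross_track_error_rate(heading_rad, speed=1.0):
    """Rate of change of cross-track error (m/s) for a path along the x-axis."""
    return speed * math.sin(heading_rad)

y_error = 0.5  # same cross-track error in all three cases
for label, heading in [("pointing away", math.radians(20)),
                       ("parallel", 0.0),
                       ("pointing toward", math.radians(-20))]:
    rate = cross_track_error_rate(heading)
    print(f"{label:>15}: error {y_error:+.2f} m, changing at {rate:+.3f} m/s")
```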

I used heading very explicitly in the go-to-goal behavior of the ESP32CameraRover; it only looks at the point to be achieved and does not use any line between waypoints. It knows where it is (based on encoders/kinematics) and it knows the position of the waypoint to be achieved. It then calculates the line between itself and the waypoint; that is the heading it needs to achieve the waypoint (the direction it must point in order to intersect the waypoint as it moves forward). It then compares its current heading to the heading to be achieved and uses this error to decide if it should turn left, turn right, or keep straight. The idea is that as long as you are pointing toward the waypoint and making progress, you will achieve the waypoint. It does not care what the last waypoint was.
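A minimal sketch of that go-to-goal idea (the function names are illustrative, not the ESP32CameraRover API): steer from the bearing to the next waypoint and the current heading, ignoring the previous waypoint entirely.

```python
import math

def heading_error(x, y, heading, wx, wy):
    """Signed difference between the bearing to the waypoint and the current
    heading, wrapped to (-pi, pi]."""
    bearing = math.atan2(wy - y, wx - x)                   # direction we must point
    error = bearing - heading
    return math.atan2(math.sin(error), math.cos(error))   # wrap to (-pi, pi]

def turn_command(error, deadband=math.radians(5)):
    """Return 'left', 'right' or 'straight' from the heading error
    (assumes positive angles are counter-clockwise)."""
    if error > deadband:
        return "left"
    if error < -deadband:
        return "right"
    return "straight"

# vehicle at the origin pointing along +x, waypoint up and to the left of its path
err = heading_error(0.0, 0.0, 0.0, 2.0, 1.0)
print(turn_command(err), round(math.degrees(err), 1))
```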

That is a very different algorithm than the one we are using now in donkeycar.

There is another parameter we care about: what does it mean to 'achieve' a waypoint? In reality a point is infinitely small, so we really treat the waypoint as a point plus a radius, which creates a circular area; when the robot is within this area we say it has achieved the waypoint. So how do we choose the radius for a waypoint? What I did was calculate the minimum distance I could actually measure using the encoders; that meant figuring out how far the vehicle travels between two ticks, which is the shortest distance that can be measured, and I used that as the radius. That worked well because the ESP32 rover is differential drive and can turn in place, so its turning radius is effectively zero. We can't do that in donkeycar; we can't turn in place. We can pivot on one wheel, but that can get you stuck trying to achieve the waypoint. It's even worse for a car-like (Ackermann steering) vehicle, which could just start orbiting the waypoint.

Of course GPS is a little different because its accuracy varies between devices and even changes for a given device over short periods of time. You can actually get the GPS's estimate of its own accuracy, so we could use that and dynamically change the waypoint radius if we wanted to. But I think it would probably be better to just warn the user if the GPS's accuracy drops below what we need.

So I think I would make the radius of the waypoint a configuration property so we can change it. It probably needs to be no more than half the minimum distance between any two waypoints.
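A sketch of that arrival test with a configurable radius (PATH_ARRIVAL_RADIUS is a hypothetical config name, not an existing donkeycar setting):

```python
import math

PATH_ARRIVAL_RADIUS = 0.5  # meters; keep <= half the minimum waypoint spacing

def achieved_waypoint(x, y, wx, wy, radius=PATH_ARRIVAL_RADIUS):
    """True when the vehicle is inside the waypoint's arrival circle."""
    return math.hypot(wx - x, wy - y) <= radius

print(achieved_waypoint(1.0, 1.0, 1.2, 1.1))  # True: about 0.22 m away
```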

With this insight in mind, we may want to change the way we record waypoints so that we get a sensible looped path and don't record waypoints that may confuse the algorithm. Rather than recording a waypoint whenever we have driven a given distance, we should record a waypoint when we are a given distance from the last waypoint we recorded. These are subtly different: if you turn in a tight circle you may never get very far from the last waypoint, but you will drive more than the minimum waypoint distance. Such a cluster of close waypoints would confuse the algorithm.
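A sketch of that recording rule (MIN_WAYPOINT_SPACING and the helper are illustrative, not existing donkeycar code): a point is only recorded when it is far enough from the last recorded waypoint, so a tight circle does not produce a cluster of points.

```python
import math

MIN_WAYPOINT_SPACING = 1.0  # meters

def maybe_record(path, x, y, spacing=MIN_WAYPOINT_SPACING):
    """Append (x, y) to path only if far enough from the last recorded waypoint."""
    if not path or math.hypot(x - path[-1][0], y - path[-1][1]) >= spacing:
        path.append((x, y))

path = []
for pos in [(0, 0), (0.3, 0.1), (0.9, 0.4), (1.5, 0.2), (1.6, 0.3)]:
    maybe_record(path, *pos)
print(path)  # [(0, 0), (1.5, 0.2)] -- the close points are skipped
```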

This also leads to an algorithm for path closure when recording waypoints:

That algorithm for choosing waypoints seems like it would work well even for our current path follow algorithm.
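One possible closure rule, purely as a sketch and an assumption rather than the algorithm outlined above: stop recording once the vehicle re-enters the arrival radius of the first waypoint, but only after it has recorded enough points to have clearly left that circle.

```python
import math

def is_closed(path, x, y, radius=0.5, min_points=10):
    """True when we are back inside the start waypoint's circle after leaving it."""
    if len(path) < min_points:
        return False  # haven't gone anywhere yet
    start_x, start_y = path[0]
    return math.hypot(x - start_x, y - start_y) <= radius
```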

If we have reliable heading then the path follow algorithm could be improved:

So how much should we change the steering given the heading error? We could use a constant value when changing the steering; that would be called a constant-step controller. It works OK if things are not changing too fast, and it requires essentially no configuration (except maybe choosing the constant delta steering angle, which we could put in configuration). Alternatively we could use a PID algorithm to choose the appropriate change in steering angle; that requires tuning the PID, but it could respond faster. So I would suggest that we default to the constant-step controller and let the user optionally turn on the PID.
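A sketch contrasting the two controllers described above (STEERING_STEP, the gains, and the sign convention are illustrative assumptions, not existing donkeycar configuration):

```python
STEERING_STEP = 0.05  # constant-step controller: fixed delta per update

def constant_step_steering(steering, heading_error):
    """Nudge steering by a fixed amount toward reducing the heading error."""
    if heading_error > 0:
        steering += STEERING_STEP
    elif heading_error < 0:
        steering -= STEERING_STEP
    return max(-1.0, min(1.0, steering))

class SimplePID:
    """Minimal PID on heading error; output is a new steering value in [-1, 1]."""
    def __init__(self, kp=1.0, ki=0.0, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def run(self, error, dt=0.05):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-1.0, min(1.0, out))
```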

Note that we could modify the behavior that happens when we get back to the origin waypoint; by default we keep going and just keep looping through the waypoints. Alternatively we could let the user stop at the origin. That choice could be in configuration.

Of course, for this to work you need a good heading estimate. GPS, even RTK GPS, may not provide a great heading; we need to test this. We can update the current NMEA parser to also parse out the heading, then drive a rectangle and see how the reported heading behaves. We can compare that against what we would get if we calculated heading from prior GPS readings. We will likely need to add an IMU and sensor fusion to keep a good heading, especially because GPS can glitch; if we have a reliable magnetometer then that would also help. If we have encoders available then we can also use kinematic pose estimation as a component of the fusion.
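For the comparison against headings derived from prior GPS readings, here is a sketch of computing a bearing from two successive fixes (standard initial great-circle bearing; it assumes the fixes are far enough apart that position noise doesn't swamp the result):

```python
import math

def bearing_from_fixes(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees from fix 1 to fix 2 (0 = north, clockwise)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

print(bearing_from_fixes(35.0000, -120.0000, 35.0001, -120.0000))  # ~0 (due north)
```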

TCIII commented 2 years ago

@Ezward,

Nice analysis, proposed solutions, and discussion of the limitations.

I have a 2WD robot chassis equipped with both a T265 and dual high resolution quadrature encoders so I will be available for testing any updates to the path_follow.py template.

I also have two GPS modules, a Matek SAM-M8Q and a Sparkfun NEO-M9N with an active antenna, that I can use to test non-RTK GPS modules with the path_follow.py template on a big 6WD robot chassis that is also equipped with dual mono encoders.

Regards, TCIII