autorope / donkeycar

Open source hardware and software platform to build a small scale self driving car.
http://www.donkeycar.com
MIT License

Adding velocity to the model #921

Open Ezward opened 2 years ago

Ezward commented 2 years ago

Adding velocity to the model

We have encoder parts that can estimate the vehicle's distance and speed. So how would we use that information? Ultimately we want to do a better job of managing the vehicle's speed. (We may also want to use this for kinematics/path planning and obstacle avoidance, but that will be a separate issue.) Our linear model currently uses throttle as a proxy for speed. This can work provided that the data has throttle values that correlate well with where the user is on the track. This worked really well for me on the old track, mostly because I kept full throttle most of the time, so the model pretty much always predicted full throttle. I never really slowed very much, so the model worked. However, the new track is more challenging and has very tight turns, so the driver must modulate the throttle much more than on the old track; this makes predicting a wide dynamic range of throttle values much more important. That turns out to be very difficult. For instance, I gathered a lot of data on the new track, but because of all the turns and loops my throttle was all over the place, and the resulting model is terrible at predicting throttle. There are places where I would essentially 'coast' to slow down; those samples have near-zero throttle, but the velocity only decreases slowly.

So throttle is not as good a parameter as velocity. Velocity is also largely independent of battery level (presuming we stay within the operational range) and of the particular kind of DonkeyCar, so models based on velocity can be shared to better effect. Models based on velocity have the potential to be much better on the new track than those based on throttle. So I think adding velocity to the model is a great idea, but there is a little more to do to really get the value out of it.

The parameter we would want to include in the model is not the encoder count or distance; we want velocity. The current encoder parts already output velocity; we may want to enhance this a little to allow for smoothing velocity over a short time. We also need timestamps so we can line up this data with other data in the pipeline; the encoder part will produce distance, velocity and a timestamp. The velocity may or may not be smoothed depending on how the encoder part is configured.
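
One simple way to produce that smoothed velocity is a time-windowed moving average over recent measurements. Here is a minimal sketch of the idea; the VelocitySmoother class and window_ms parameter are hypothetical, not the actual encoder-part API:

```python
import time

class VelocitySmoother:
    """Average velocity over the most recent time window.
    If window_ms is None, return the instantaneous velocity unchanged."""

    def __init__(self, window_ms=None):
        self.window_ms = window_ms
        self.samples = []  # list of (timestamp_seconds, velocity)

    def update(self, velocity, timestamp=None):
        if self.window_ms is None:
            return velocity
        timestamp = timestamp if timestamp is not None else time.time()
        self.samples.append((timestamp, velocity))
        # Drop samples older than the smoothing window.
        cutoff = timestamp - self.window_ms / 1000.0
        self.samples = [(t, v) for (t, v) in self.samples if t >= cutoff]
        return sum(v for _, v in self.samples) / len(self.samples)
```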

For the model we would replace the throttle parameter with velocity; we would keep the steering parameter. So the model will train on images tagged with steering angle and velocity and will infer steering angle and velocity given an image.

Once we can measure velocity and infer a desired velocity, then we want to be able to make the car drive at the desired velocity. We need to add closed-loop speed control to the vehicle. So we would have a part that, given a target velocity in the working range, maintains that speed by increasing or decreasing the throttle to make the measured speed match the target speed. I do that in this project in DriveWheel::_pollSpeed() (note: this is C++): it reads the encoder value at the current time and uses it, together with prior encoder values, to calculate a velocity. It then compares the measured velocity to the target velocity and increases or decreases the throttle PWM to try to match the target. (This is a very simple constant-step controller; it has the advantage of being very easy to calibrate, but it is slow to react. We probably want to use a PID controller for racing.)
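
For reference, a minimal Python sketch of that constant-step idea; the class name, step size, and throttle limits here are hypothetical, not an existing donkeycar part:

```python
class ConstantStepController:
    """Toy constant-step speed controller: nudge the throttle by a fixed
    increment until the measured velocity matches the target."""

    def __init__(self, step=0.02, min_throttle=-1.0, max_throttle=1.0):
        self.step = step
        self.min_throttle = min_throttle
        self.max_throttle = max_throttle

    def run(self, target_velocity, measured_velocity, throttle):
        # Increase throttle when too slow, decrease when too fast.
        if measured_velocity < target_velocity:
            throttle += self.step
        elif measured_velocity > target_velocity:
            throttle -= self.step
        return max(self.min_throttle, min(self.max_throttle, throttle))
```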

So the tasks are:

  1. Modify the encoder parts, ArduinoEncoder and RotaryEncoder, in encoder.py:

    • Output distance, velocity and a timestamp for every measurement. We want all of that data because we may use it in subsequent parts of the pipeline (for instance, in a kinematics part).
    • Produce a smoothed velocity given a time span to average over. If the time span is not specified, then no smoothing is done (acts like the current parts).
    • Add SPEED_SMOOTHING_MS to the configuration parameters.
    • IN-PROGRESS: see branch 921-next-generation-odometer-parts
  2. Modify the pipeline in complete.py

    • When constructing the encoder, pass the speed smoothing parameter if it is configured.
    • Change the pipeline so the encoder part outputs the distance, velocity and timestamp into the pipeline (it currently only outputs velocity).
    • Change the pipeline so that the image, steering and velocity are written to the tub in drive mode. DONE: this already existed.
    • IN-PROGRESS: see branch 921-next-generation-odometer-parts
  3. Starting with the linear model, create a new model where we use velocity rather than throttle as a scalar input and output. This should be very easy. This will need another model name, like LINEAR_VELOCITY, so it can coexist with the current model. We maintain the steering angle parameter; so the model will infer both steering angle and velocity.

    • Modify cfg_complete.py so this new model type can be chosen.
    • There is one other thing: neural networks like normalized values; for instance, our throttle value is in the range -1 to 1. Velocity is not a normalized value, so we would want to scale our velocities into the range -1 to 1 prior to saving to the tub OR prior to training. In either case, we would then need to remember those ranges and apply them when we infer velocity from the model, since it will output -1 to 1 but we want meters per second. I'm not sure if we should make the reverse and forward ranges the same, since reverse driving would generally be much slower; we may want to make it work like we do now with throttle, with essentially two ranges: max reverse velocity is mapped to -1..0 and max forward velocity to 0..1 (see the sketch after this list).
  4. Modify the pipeline so that if LINEAR_VELOCITY is the chosen model, it outputs steering and velocity to the pipeline when in auto-pilot mode.

  5. Create a new part that implements closed loop velocity control.

    • This part will take in a target velocity and the current throttle.
    • The part will calculate a new throttle value designed to achieve the target velocity. We probably want to implement this as a PID controller. We may also want to do feed-forward, although we may get fast-startup from the AI Launcher.
    • The part will output the new throttle value that will be used later in the pipeline.
  6. Update the pipeline so that if the LINEAR_VELOCITY model is the chosen model and we are in autopilot mode, it constructs a velocity controller part and inserts it into the pipeline such that it can get the target velocity and current throttle as input and can output a throttle value that will be used by the motor actuator part; it should be inserted just before the DriveMode part.
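
For task 3, here is a minimal sketch of the two-range velocity normalization described above; the range constants and function names are hypothetical and would come from calibration:

```python
MAX_FORWARD_VELOCITY = 3.0  # meters/second; hypothetical calibrated value
MAX_REVERSE_VELOCITY = 1.0  # meters/second; hypothetical calibrated value

def normalize_velocity(velocity):
    """Map velocity in m/s to [-1, 1]: 0..max forward -> 0..1,
    -max reverse..0 -> -1..0."""
    if velocity >= 0:
        return min(velocity / MAX_FORWARD_VELOCITY, 1.0)
    return max(velocity / MAX_REVERSE_VELOCITY, -1.0)

def denormalize_velocity(value):
    """Invert normalize_velocity: model output in [-1, 1] back to m/s."""
    if value >= 0:
        return value * MAX_FORWARD_VELOCITY
    return value * MAX_REVERSE_VELOCITY
```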

Optional

TCIII commented 2 years ago

@Ezward,

I have purchased a single output encoder similar to the one in the DC odometry/encoder guide. I have a large 6WD robot chassis where the center two wheels are not powered and have the capability of attaching an encoder wheel that can work with the single output encoder. I also have a smaller 6WD chassis where I can replace one of the center motors with a motor/quadrature encoder and use an Adafruit Metro Mini to read the encoder A/B outputs and provide odometry input to the RPi via a USB connection. So I can test both single and quadrature encoders. The encoder wheel I will use with the single output encoder has only 20 slots, so it will be of somewhat lower resolution. On the other hand, the motor quadrature encoder is a US Digital encoder that outputs 1000 cycles per revolution, or 4000 quadrature counts per revolution. The motor has a 30:1 reduction with a max speed of 200 rpm.

Ezward commented 2 years ago

Note that the 'path follower' template, which uses the RealSense T265, uses an encoder and a part called OdomDist that uses ticks from the encoder part to calculate distance: https://github.com/autorope/donkeycar/blob/b1af0aacf488c938231912f89c4bfa5c2b11c649/donkeycar/templates/path_follow.py#L61 So that code separates encoder tick counting from distance calculation. I'm not sure if that is better, or that it is worth having separate parts.

TCIII commented 2 years ago

@Ezward,

Interesting.

I have run the path_follow.py template on both a Nano 4GB and a Rpi 4B 4GB and found that the AI became easily confused and lost if the path was anything more than a big oval track. Attempting to use the AI on a path that included right and left turns was a no-go, and I abandoned the path_follow.py template as unusable.

Ezward commented 2 years ago

The first step is to refactor the various encoder parts. They have redundant distance and velocity calculations. One of them does velocity smoothing, the others do not.

In future pull requests we will add unicycle and bicycle kinematic models. So we want to support multiple encoders and odometers so that we can support differential drive in the pipeline; the two odometers would be the input to the unicycle kinematic model.

TCIII commented 2 years ago

@Ezward,

I received the US Digital quadrature motor encoder in the mail today and found that the installation instructions on the Lynxmotion website are sorely lacking in relation to successfully installing the encoder on the back of the motor. I knew this already, so it was no surprise. The spacer ring used to mount the encoder assembly on the back of the motor initially did not fit flush to the motor's back surface because the spacer ring sat on top of the motor power terminal bosses. Filing clearance dimples in the bottom edge of the spacer ring solved that problem. The spacer ring is now glued to the back of the motor. After the CA glue had set, I finished the encoder installation by attaching the encoder PWB to the encoder mount, pressing on the encoder wheel to what I thought was an appropriate distance from the sensor, and then pushing on the back cover.

I then used the Arduino IDE to compile the encoder.ino program for my Adafruit Metro Mini, but received an error about "no encoder type specified". So I found that I had to install this Arduino library: https://www.arduino.cc/reference/en/libraries/encoder/. After that the program compiled without error and downloaded to my Adafruit Metro Mini. I then attached the encoder cables to the Mini and plugged in the USB connector so I could use the Arduino IDE serial terminal to view the Mini's output. Using a small wrench to turn the motor shaft, I observed both positive and negative pulse values from the US Digital motor encoder via the Mini's USB output.

With one revolution of the motor's geared output shaft, I observed about 12,666 forward encoder pulses. The motor has a 30:1 gear reduction and the encoder outputs 4,000 pulses per motor revolution, so the 12,666 pulses per output shaft revolution correlates well with the expected encoder output of ~12,000.

TCIII commented 2 years ago

@Ezward,

I installed a single-ended encoder (https://www.amazon.com/DAOKI-Measuring-Optocoupler-Arduino-Encoders/dp/B081W2TY6Q/ref=sr_1_1?dchild=1&keywords=DAOKI+5Pcs+Speed+Measuring+Sensor+LM393+Speed+Measuring+Module+Tacho+Sensor+Slot+Type+IR+Optocoupler+for+MCU+RPI+Arduino+DIY+Kit+with+Encoders&qid=1629653644&sr=8-1) on one of the center unpowered wheels of my large 6WD robot chassis. I used a stepped reamer to enlarge the disk hole to around 6 mm, which provided a nice snug fit. An 11 mm standoff with a 1/8 inch spacer provided a solid mounting point for the encoder board, such that there is around 0.020 inch of clearance between the outer circumference of the disk and the top of the encoder module.

Here is the result of running DC manage.py drive with input from the single-ended encoder:

Stopping Rotary Encoder Distance Travelled: 5.1869 meters Top Speed: 1.555 meters/second

My calculated value for 10 turns of the encoder wheel is 5.185 meters, and that closely matches the reported "Distance Travelled: 5.1869 meters". My particular encoder wheel/encoder appears to function as expected.
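
For reference, a minimal sketch of the tick-to-distance arithmetic behind that check, assuming the 20-slot wheel mentioned earlier and a circumference derived from the numbers above (both constants are illustrative, not measured values from this setup):

```python
TICKS_PER_REVOLUTION = 20       # slots in the encoder wheel
WHEEL_CIRCUMFERENCE_M = 0.5185  # meters per revolution (5.185 m / 10 turns)

def ticks_to_distance(ticks):
    """Convert raw encoder ticks to distance traveled in meters."""
    return ticks * WHEEL_CIRCUMFERENCE_M / TICKS_PER_REVOLUTION

def velocity_from_ticks(delta_ticks, delta_seconds):
    """Velocity in meters/second from a tick count over a time interval."""
    return ticks_to_distance(delta_ticks) / delta_seconds

# 10 full turns -> 200 ticks -> 5.185 meters
assert abs(ticks_to_distance(200) - 5.185) < 1e-9
```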

There were complaints that the output of the single-ended encoder wheel/encoder assembly recommended in the DC odometry/encoder guide was noisy, and it was recommended to install a 33k ohm resistor between the LM393 comparator output and its "+" input. My encoder board does not have that feedback resistor either, but that does not appear to affect the accuracy of its output.

Ezward commented 2 years ago

The new encoder parts are almost done. Now we need closed-loop speed control; we have to have that before we even bother with training a model with velocity. The most common way to do this is with a PID controller. I find a PID controller hard to calibrate, but when calibrated well it responds to velocity changes quickly and minimizes oscillation around the target velocity. The simplest and easiest speed control is what is called a constant-step controller, which is basically: 1) measure the speed with the encoder; 2) if slower than the target, increment the throttle by some constant; if faster than the target, decrement the throttle by some constant. However, it is very slow to reach the target unless you add one more thing: feed-forward. With feed-forward you build a curve of throttle to velocity, and when starting from zero you calculate the initial throttle from the curve so that the controller starts fast; after that you use the constant-step controller (or even a PID controller). However, once underway the constant-step controller does not respond quickly to large changes in velocity, and the throttle/velocity curve becomes more inaccurate as the battery drains. So I will probably make the closed-loop speed control pluggable and start with a constant-step controller (see the feed-forward sketch below). You can see how to calibrate a constant-step controller with feed-forward here: https://www.youtube.com/watch?v=ciDCUUx8MXI
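
A minimal sketch of that feed-forward idea: interpolate an initial throttle from a measured throttle-to-velocity curve, then hand control over to the constant-step (or PID) controller. The calibration points below are hypothetical:

```python
import bisect

class FeedForward:
    """Estimate an initial throttle for a target velocity by linear
    interpolation over a measured throttle-to-velocity curve."""

    def __init__(self, curve):
        # curve: list of (velocity, throttle) pairs sorted by velocity
        self.velocities = [v for v, _ in curve]
        self.throttles = [t for _, t in curve]

    def throttle_for(self, target_velocity):
        i = bisect.bisect_left(self.velocities, target_velocity)
        if i <= 0:
            return self.throttles[0]
        if i >= len(self.velocities):
            return self.throttles[-1]
        v0, v1 = self.velocities[i - 1], self.velocities[i]
        t0, t1 = self.throttles[i - 1], self.throttles[i]
        return t0 + (t1 - t0) * (target_velocity - v0) / (v1 - v0)

# Hypothetical calibration points measured at a full battery.
feed_forward = FeedForward([(0.0, 0.0), (0.8, 0.3), (1.6, 0.5), (2.5, 0.8)])
initial_throttle = feed_forward.throttle_for(1.2)  # ~0.4
```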

TCIII commented 2 years ago

@Ezward,

A nice analysis of possible closed-loop speed control functions. Starting with the constant-step controller makes sense, as I have tried to calibrate PID controllers for two-axis servos and it is not easy.

Ezward commented 2 years ago

The new encoder parts are close to being ready. This branch https://github.com/autorope/donkeycar/tree/921-next-generation-odometer-parts includes code that supports dual encoders for differential drive setups. The code also includes Arduino sketches for single-channel and quadrature encoders with debouncing, and supports multiple encoders from a single Arduino. It also includes a Unicycle model kinematics part that estimates the pose of a differential drive robot; for now that is simply used to get the average distance and speed of the two wheels. The pose estimation will likely be more useful in the path_follow.py template.
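
For reference, a minimal sketch of the unicycle pose update from two wheel distances; the function shape and axle_length parameter are illustrative, not the actual part's API:

```python
import math

def unicycle_update(x, y, theta, left_distance, right_distance, axle_length):
    """Update a differential-drive pose estimate from the distance each
    wheel traveled since the last update (all distances in meters).
    A simple Euler step; a real part might use the midpoint heading."""
    distance = (left_distance + right_distance) / 2.0            # forward travel
    delta_theta = (right_distance - left_distance) / axle_length  # heading change
    theta += delta_theta
    x += distance * math.cos(theta)
    y += distance * math.sin(theta)
    return x, y, theta
```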

Ezward commented 2 years ago

The new encoder parts are close to being ready. This branch https://github.com/autorope/donkeycar/tree/921-next-generation-odometer-parts includes code that supports dual encoders for differential drive setups. The code also includes Arduino sketches for single-channel and quadrature encoders with debouncing, and supports multiple encoders from a single Arduino. It also includes Unicycle and Bicycle model kinematics parts that estimate the pose of the robot using the odometry input. This is used in the path_follow template as another source of pose. That template now treats the T265 as optional: you can still use the template if you only have odometry, and if you have both odometry and the T265, then odometry is used to improve the T265 pose estimates.

Ezward commented 2 years ago

I just pushed a large set of changes that include a new Keras model that infers forward velocity and angular velocity. We now have inverse kinematics parts (both bicycle and unicycle) that can take the forward and angular velocities and turn them into wheel velocities and/or a steering angle. Finally, there is also a simple speed controller, so we can use these inferred velocities to control the robot's speed precisely: we use the odometry input to measure the robot's actual velocity, then modify the throttle to make the actual velocity match the desired target velocity. This latest commit is still a very early ALPHA and is unlikely to actually work, since there has been no integration testing yet. Further, I still need to document the configuration and calibration steps necessary to make this all work.
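
A minimal sketch of those inverse kinematics; the axle_length and wheelbase parameters are illustrative:

```python
import math

def unicycle_inverse(forward_velocity, angular_velocity, axle_length):
    """Forward and angular velocity -> left/right wheel velocities
    for a differential-drive (unicycle) robot."""
    left = forward_velocity - angular_velocity * axle_length / 2.0
    right = forward_velocity + angular_velocity * axle_length / 2.0
    return left, right

def bicycle_inverse(forward_velocity, angular_velocity, wheelbase):
    """Forward and angular velocity -> steering angle (radians)
    for a car-like (bicycle) robot."""
    if forward_velocity == 0:
        return 0.0
    return math.atan(angular_velocity * wheelbase / forward_velocity)
```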

Ezward commented 1 year ago

The new encoder->tachometer->odometer->kinematics pose estimation pipeline has landed; see PR https://github.com/autorope/donkeycar/pull/1089. So we now have a way to get an estimate of forward velocity from the encoders. So what is left: