gt-marine-robotics-group / Virtuoso

ROS2 autonomy architecture for Georgia Tech Marine Robotics. Designed to be modular and work with any combination of sensors + motors.

Implement state estimation algorithm #1

Closed ragingonyx closed 2 years ago

ragingonyx commented 3 years ago

As part of the localization server, we need a Kalman filter to obtain accurate state estimation in a noisy environment. This is most easily done with an Extended Kalman Filter (EKF), though in theory any state estimation algorithm would work. Others worth looking into are Unscented Kalman Filters (UKFs) and factor graphs.

The priority is getting any of these algorithms working ASAP so that we can system test other aspects of the architecture that rely on state estimation.

The scope of this issue only covers the development of the algorithm and not the deployment within ROS, which will be a separate project. We want to focus on getting the core logic correct and working before we integrate with the overall system.

Some good starting documentation can be found here:

- Paper outlining the differences between an EKF and a UKF and the math behind them.
- Page explaining in simpler terms what a UKF is and how it's useful compared to an EKF.
- Existing implementation of a basic Kalman filter in ROS2.
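Since the scope here is the core logic, below is a minimal numpy sketch of the generic linear predict/update cycle; the EKF/UKF variants differ in how they handle a nonlinear `F`/`H`, and every matrix here is a placeholder to be filled in with our model and sensor characteristics:

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate the state estimate x and covariance P through the model F."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Correct the prediction with measurement z."""
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```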

colineRamee commented 3 years ago

Daniel Foreman did some work on a Kalman filter last year. The main gotcha with these filters is that they need a plant model (i.e. a model of the vehicle), and they require some tuning. The algorithms themselves are very straightforward. As a side note, there was no sensor noise in VRX (and I don't think they've changed that) and the sensors provide frequent updates, so a Kalman filter will actually perform worse there than just using the raw measurements.

ragingonyx commented 3 years ago

I don't think I accounted for needing a motion model of the vehicle when I initially designed the localization server; that'll have to be fixed in the next iteration. Could a mapping from distance to motor commands serve the role of a model? As far as I know, the important part is having a good idea of how much your vehicle is actually going to move if you tell it to move forward, say, 5 meters. Currently I plan to create a movement API that houses a PID controller and maps measurable distance to motor commands, since the motors only respond to on/off for a given amount of time.
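For concreteness, a minimal sketch of the kind of PID loop that API could house; the gains are placeholders that would need tuning on the vehicle:

```python
class PID:
    """Minimal PID loop; kp/ki/kd are made-up placeholders to be tuned."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. drive the motors each cycle with pid.step(target_distance - traveled, dt)
```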

As for VRX, I know for a fact that this development will be useless there; however, working state estimation is important for any competition that isn't completed entirely in simulation, namely RobotX. The development this semester isn't necessarily catered towards VRX tasking, though it does take some inspiration from it. The idea is to have a modular system that can be used in multiple competitions, not just VRX.

colineRamee commented 3 years ago

Except maybe for the sub (whose dynamics are heavily damped), a motor-command-to-distance mapping is going to be pretty bad due to the vehicle's inertia: you need to know the current velocity on top of the current command to estimate the future state. Is the movement API going to be like a "go to waypoint" functionality? What do you mean by the motors only responding to on/off? All the motors we've used have a roughly proportional response to the command. I agree that a good filter will be very useful for other competitions!
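To illustrate the inertia point, here is a toy one-dimensional surge model (all constants invented). The same command over the same time produces very different distances depending on the velocity you start from:

```python
def surge_step(v, u, dt, mass=400.0, drag=80.0, thrust_gain=250.0):
    """One Euler step of a toy 1-D surge model: m * dv/dt = k*u - c*v.
    v: current surge velocity (m/s), u: normalized motor command in [-1, 1].
    All constants are made up; a real model would be identified from data."""
    dv = (thrust_gain * u - drag * v) / mass
    return v + dv * dt
```

Starting from rest versus already moving at 2 m/s, integrating this with the same `u` gives different travel distances, which is exactly why a pure command-to-distance table breaks down.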

ragingonyx commented 3 years ago

Couldn't we get current velocity from our other sensors, though? Our IMU has an accelerometer that gives acceleration readings we could integrate to get velocity. We also have a GPS that gives absolute position, which we could differentiate to get velocity. Unless I'm grossly misunderstanding something, we should be able to do some type of sensor fusion on those to get a decent estimate of our velocity.
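For illustration, one crude way to do that fusion is a complementary filter that blends dead-reckoned IMU velocity with GPS-derived velocity; the blend factor `alpha` is a made-up starting value:

```python
def fuse_velocity(v_est, accel, gps_vel, dt, alpha=0.98):
    """Complementary filter: trust the integrated accelerometer at high
    frequency and the noisier, slower GPS-derived velocity at low frequency."""
    v_imu = v_est + accel * dt       # dead-reckoned velocity from the IMU
    return alpha * v_imu + (1.0 - alpha) * gps_vel
```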

My way of circumventing the inertia is to:

  1. Sample multiple distances along with the motor commands required to travel those distances.
  2. Compute a regression model (either with ML or any other form of regression) fitting a non-linear function that maps motor command to distance. This gives us a general mapping to start with (see the sketch after this list).
  3. Dynamically fine-tune the model by "closing the loop", i.e. verifying on each attempt to travel a certain distance that the mapping holds. This would be done primarily with the other sensors on board, namely our camera and lidar: we could optimize for the error between our actual distance from a given feature (CV algorithms help here) and our projected distance.

Step 3 is something I think could be done by the PID, so it might not even be necessary (open to any thoughts about this).
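A minimal sketch of step 2, with made-up sample data and an arbitrary quadratic standing in for whatever regression form ends up fitting best:

```python
import numpy as np

# Hypothetical samples: (normalized motor command, measured distance in m)
commands  = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
distances = np.array([0.9, 2.1, 3.8, 6.0, 8.5])

# Fit a quadratic as the "general mapping to start with" (step 2);
# any regression form would do, this is just a placeholder choice.
coeffs = np.polyfit(commands, distances, deg=2)
command_to_distance = np.poly1d(coeffs)

# Invert numerically to get the command needed for a target distance.
def distance_to_command(d, grid=np.linspace(0.0, 1.0, 1001)):
    return grid[np.argmin(np.abs(command_to_distance(grid) - d))]
```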

The movement API is meant to provide an abstraction over the motor commands. An example usage would be this: the waypoint navigation server calculates the next waypoint at (x, y), which requires traveling distance D from the current point. The movement API would receive this distance (and would also handle orienting the vehicle in the right direction) and use the mapping function to issue the motor command required to travel D.

I meant that we can't give a distance as an input to the motors; we can only say "run at this power for this amount of time." Unfortunately, a normal person doesn't think in terms of motor commands, so the idea is to provide an easier interface for waypoint navigation to work with. Again, this is only based on what I know about our existing system, so I could be completely wrong.
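Roughly, the abstraction could look like the sketch below; `motors.run` and the `distance_to_duration` mapping are hypothetical names, assuming the fitted regression has been recast as a distance-to-run-time lookup at a fixed power:

```python
class MovementAPI:
    """Hypothetical facade: callers request a distance, never raw motor commands."""

    def __init__(self, motors, distance_to_duration, power=0.6):
        self.motors = motors                              # assumed motor driver
        self.distance_to_duration = distance_to_duration  # fitted mapping (hypothetical)
        self.power = power                                # fixed cruise power

    def travel(self, distance_m):
        # Translate the requested distance into the only thing the motors
        # understand: "run at this power for this amount of time".
        seconds = self.distance_to_duration(distance_m)
        self.motors.run(power=self.power, seconds=seconds)
```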

ragingonyx commented 2 years ago

Another Kalman Filter resource: http://www.roboticsproceedings.org/rss01/p38.pdf

startrek1wars commented 2 years ago

https://automaticaddison.com/sensor-fusion-using-the-robot-localization-package-ros-2/

Tutorial to look into for ekf/ukf in robot_localization

startrek1wars commented 2 years ago

http://docs.ros.org/en/melodic/api/robot_localization/html/state_estimation_nodes.html

ekf/ukf parameters in robot_localization
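For reference, a hypothetical ROS 2 launch file wiring up `ekf_node`; the topic names (`/gps/odom`, `/imu/data`) and the choice of fused fields are assumptions for our sensor suite, not settings from this repo:

```python
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='robot_localization',
            executable='ekf_node',
            name='ekf_filter_node',
            parameters=[{
                'frequency': 30.0,
                'two_d_mode': True,          # surface vehicle: ignore z/roll/pitch
                'odom_frame': 'odom',
                'base_link_frame': 'base_link',
                'world_frame': 'odom',
                # Fuse x/y position from GPS odometry (placeholder topic)...
                'odom0': '/gps/odom',
                'odom0_config': [True,  True,  False,
                                 False, False, False,
                                 False, False, False,
                                 False, False, False,
                                 False, False, False],
                # ...and yaw, yaw rate, and x acceleration from the IMU.
                'imu0': '/imu/data',
                'imu0_config': [False, False, False,
                                False, False, True,
                                False, False, False,
                                False, False, True,
                                True,  False, False],
            }],
        ),
    ])
```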

startrek1wars commented 2 years ago

From Coline:

"In general EKF/UKF can accept any model (typically given the current state and the command what is the derivative of the state). So if you have a motion model you can implement these filters. If you don't have a motion model (for instance if you're making a filter that goes on a sensor that can be on any type of vehicle, or as in the robot_localization package they fuse the sensors data but don't ask you to provide a motion model) then you can make a crude motion model that is basically assuming that the acceleration is constant (I think don't quote me on that) and propagating that assumption to the other states (velocity, position) while taking into account the position of the different sensors. Mary-Catherine Martin used the robot localization for VRX but she had weird results (it diverged).

Some good resources: https://github.com/cra-ros-pkg/robot_localization/blob/indigo-devel/src/ekf.cpp#L209, https://answers.ros.org/question/221837/robot_localization-ekf-internal-motion-model/" "The algorithms are only a few lines if you use matrix libraries. " - implementing it myself might be the best solution after all
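A quick sketch of that crude fallback, a constant-acceleration transition matrix for a single axis:

```python
import numpy as np

def const_accel_F(dt):
    """State transition for [position, velocity, acceleration] on one axis,
    under the constant-acceleration assumption a model-free filter falls
    back on when no vehicle model is provided."""
    return np.array([[1.0, dt,  0.5 * dt**2],
                     [0.0, 1.0, dt],
                     [0.0, 0.0, 1.0]])
```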

startrek1wars commented 2 years ago

https://thekalmanfilter.com/kalman-filter-python-example/ I like this example to showcase how the filter is able to use position data for velocity estimation, even if there is no velocity measurement.
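In that spirit, a self-contained toy run (all noise values invented) where a constant-velocity filter sees only noisy positions and still recovers the velocity:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
H = np.array([[1.0, 0.0]])              # position is the only measurement
Q = 0.01 * np.eye(2)                    # made-up process noise
R = np.array([[0.5]])                   # made-up measurement noise

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
for k in range(1, 101):
    z = np.array([2.0 * k * dt + rng.normal(0.0, 0.7)])  # ground truth: 2 m/s
    # predict
    x, P = F @ x, F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x, P = x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P

print(x[1])  # velocity estimate approaches 2.0 without ever being measured
```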

startrek1wars commented 2 years ago

https://www3.nd.edu/~lemmon/courses/ee67033/pubs/julier-ukf-tac-2000.pdf

See this paper for a suggestion on how to keep the covariance matrix positive definite
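Independent of that paper's specific suggestion, one standard numerical safeguard is the Joseph-form covariance update plus explicit re-symmetrization, sketched here:

```python
import numpy as np

def joseph_covariance(P, K, H, R):
    """Joseph-form update: algebraically equivalent to (I - KH) @ P, but
    stays symmetric positive semidefinite under floating-point round-off."""
    I_KH = np.eye(P.shape[0]) - K @ H
    P = I_KH @ P @ I_KH.T + K @ R @ K.T
    return 0.5 * (P + P.T)   # fold out any residual asymmetry
```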

startrek1wars commented 2 years ago

- https://github.com/methylDragon/ros-sensor-fusion-tutorial/blob/master/01%20-%20ROS%20and%20Sensor%20Fusion%20Tutorial.md
- http://docs.ros.org/en/jade/api/robot_localization/html/index.html
- https://automaticaddison.com/set-up-the-odometry-for-a-simulated-mobile-robot-in-ros-2/

Useful robot localization tools.

https://pyproj4.github.io/pyproj/stable/index.html

Might use this if implementing a custom lat/long to x/y conversion.
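If we do roll our own conversion, a minimal pyproj sketch (UTM zone 17N here is a placeholder; use whatever zone covers the venue):

```python
from pyproj import Transformer

# WGS84 lat/long -> UTM zone 17N, in metres.
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32617", always_xy=True)

x, y = to_utm.transform(-84.396, 33.776)   # (lon, lat) -> (easting, northing)
```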

http://wiki.ros.org/hector_gazebo_plugins

This is the IMU/GPS plugin VRX uses.