RoboJackets / robocup-software

Georgia Tech RoboJackets Software for the RoboCup Small Size League
Apache License 2.0

Rewrite vision filter using multi-hypothesis localization #1202

Closed JNeiger closed 5 years ago

JNeiger commented 6 years ago

The overall process will roughly follow this approach.

The idea is that on the first camera frame, we will create a "Kalman filter" object initialized at each ball position possibility reported in the camera packet (referred to as a "camera ball" from here on). On each subsequent frame, we will try to attach measurements to the existing "Kalman filter" objects using some cutoff such as a chi-squared test, or something even simpler like a static distance check. If a measurement doesn't match any previous filter, we will just create a new filter at that point.
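A minimal sketch of that association step, assuming a plain distance cutoff rather than a chi-squared gate. The `BallFilter` struct and `associateMeasurement` name are hypothetical, and the position overwrite stands in for a real Kalman update:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical minimal filter record: position estimate plus health counter.
struct BallFilter {
    double x, y;   // current position estimate
    int health;    // confidence counter
};

// Attach a camera ball measurement to the nearest existing filter, or
// spawn a new filter if nothing lies within the cutoff radius.
// Returns the index of the filter the measurement was attached to.
std::size_t associateMeasurement(std::vector<BallFilter>& filters,
                                 double mx, double my, double cutoff) {
    std::size_t best = filters.size();
    double bestDist = cutoff;
    for (std::size_t i = 0; i < filters.size(); ++i) {
        double d = std::hypot(filters[i].x - mx, filters[i].y - my);
        if (d < bestDist) {
            bestDist = d;
            best = i;
        }
    }
    if (best == filters.size()) {
        filters.push_back({mx, my, 1});  // no match: new hypothesis
    } else {
        filters[best].x = mx;            // stand-in for a Kalman update
        filters[best].y = my;
    }
    return best;
}
```

A chi-squared gate would replace the Euclidean distance with the Mahalanobis distance under each filter's innovation covariance.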

Each time a point matches an existing filter, a counter referred to as health is increased up to a max value. Every time step we also decrease the health of all existing filters. In the linked paper above, this health is referred to as the chi count.
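The health bookkeeping could look something like the sketch below. All the constants and function names are assumptions for illustration; the actual increment, decay, and cap would be tuned:

```cpp
#include <algorithm>

// Hypothetical health bookkeeping: bump on a matched measurement (capped),
// decay every frame, prune a filter once its health hits zero.
constexpr int kMaxHealth = 20;  // assumed cap
constexpr int kHealthInc = 2;   // assumed gain per matched measurement
constexpr int kHealthDec = 1;   // assumed decay per frame

int onMeasurementMatched(int health) {
    return std::min(health + kHealthInc, kMaxHealth);
}

int onFramePassed(int health) {
    return std::max(health - kHealthDec, 0);
}

bool shouldPrune(int health) { return health <= 0; }
```

Making the increment larger than the decay lets a consistently-seen filter survive occasional dropped frames.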

The filters should also take into account bounces off of other robots and similar events.

A kick detection system will also need to be developed. It will need to use factors relating both to changes in the ball state and to the surrounding robots. Ball state factors would be things like changes in speed, changes in direction, etc. The robot factors will mostly be distance based, such as whether the ball is within a robot's mouth.
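One way those cues could combine is a simple heuristic like the following. Every threshold here is an assumed placeholder, and `detectKick` is a hypothetical name, not anything from the codebase:

```cpp
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Hypothetical kick heuristic: flag a kick when the ball's speed jumps
// sharply or its direction changes, while some robot's mouth was close
// enough to the ball to have caused it.
bool detectKick(double prevVx, double prevVy,
                double curVx, double curVy,
                double mouthToBallDist,
                double speedJumpThresh = 1.0,    // m/s, assumed
                double angleThresh = 0.5,        // rad, assumed
                double mouthDistThresh = 0.15) { // m, assumed
    double prevSpeed = std::hypot(prevVx, prevVy);
    double curSpeed = std::hypot(curVx, curVy);
    bool speedJump = (curSpeed - prevSpeed) > speedJumpThresh;

    double angleDelta = std::fabs(std::atan2(curVy, curVx) -
                                  std::atan2(prevVy, prevVx));
    if (angleDelta > kPi) angleDelta = 2.0 * kPi - angleDelta;  // wrap
    bool dirChange = angleDelta > angleThresh;

    bool robotClose = mouthToBallDist < mouthDistThresh;
    return robotClose && (speedJump || dirChange);
}
```

Requiring the robot-proximity cue on top of the ball-state cues helps reject false positives from plain vision noise.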

Additionally, it is very important to figure out where the kick is going. Here we will use some sort of least-squares or nonlinear model on the raw vision packets attributed to this filter to quickly estimate where the ball is headed.
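For the least-squares variant, fitting `x(t) = x0 + vx*t` (and likewise for y) over the raw samples gives the kick's launch point and velocity. This sketch ignores friction, which a nonlinear model would add; `LineFit`/`fitLine` are illustrative names:

```cpp
#include <cstddef>
#include <vector>

// Ordinary least-squares line fit of one position axis against time:
// p(t) = intercept + slope * t. Run once per axis on the raw vision
// samples attributed to a filter to estimate kick origin and velocity.
struct LineFit { double intercept, slope; };

LineFit fitLine(const std::vector<double>& t, const std::vector<double>& p) {
    double n = static_cast<double>(t.size());
    double st = 0, sp = 0, stt = 0, stp = 0;
    for (std::size_t i = 0; i < t.size(); ++i) {
        st  += t[i];
        sp  += p[i];
        stt += t[i] * t[i];
        stp += t[i] * p[i];
    }
    double denom = n * stt - st * st;       // zero only for degenerate input
    double slope = (n * stp - st * sp) / denom;
    double intercept = (sp - slope * st) / n;
    return {intercept, slope};
}
```

On samples at t = 0..3 with positions 1, 3, 5, 7 this recovers intercept 1 and slope 2, i.e. launch position and speed along that axis.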

The same filter approach will be applied to the robots themselves. The only difference is that each robot will use a 6-state filter (xy position/velocity plus angle/angular velocity) instead of a 4-state one (xy position/velocity). The same health mechanism and so on will be used.
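The two state layouts differ only in the heading terms. A constant-velocity prediction step for each, with the state orderings assumed for illustration (the robot's heading is wrapped back into (-pi, pi]):

```cpp
#include <array>
#include <cmath>

constexpr double kTwoPi = 6.28318530717958647692;

// Ball: 4-state [x, y, vx, vy]; constant-velocity prediction.
std::array<double, 4> predictBall(std::array<double, 4> s, double dt) {
    s[0] += s[2] * dt;
    s[1] += s[3] * dt;
    return s;
}

// Robot: 6-state [x, y, theta, vx, vy, omega]; same model plus heading.
std::array<double, 6> predictRobot(std::array<double, 6> s, double dt) {
    s[0] += s[3] * dt;
    s[1] += s[4] * dt;
    s[2] += s[5] * dt;
    s[2] = std::remainder(s[2], kTwoPi);  // wrap heading into (-pi, pi]
    return s;
}
```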

This will basically replace everything in the soccer/modeling folder.

jpfeltracco commented 6 years ago

@JNeiger How do you think this will handle small camera calibration differences? Playing back some old vision logs, it's apparent to me that the cameras are rarely calibrated that well, leaving us with two solid position estimates of the same entity separated by some static offset.

Do you think there would be a benefit to tuning the system so that those differing camera measurements are registered as different hypotheses, or would we want a single Kalman filter handling that jitter?

JNeiger commented 6 years ago

So my plan for keeping the covariances low on the Kalman filters was to do everything in the individual camera frames, and then merge the per-camera Kalman filters together using a weighted average based on the covariances of the best filter in each.
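That covariance-weighted merge could look like the sketch below (one dimension for brevity; the struct and function names are assumptions). Each camera's estimate is weighted by its inverse variance, so the better-calibrated camera dominates:

```cpp
#include <vector>

// One-dimensional estimate with its variance; the real filters would
// carry a 2-D (or larger) state and covariance matrix.
struct Estimate { double pos; double var; };

// Precision-weighted average of each camera's best filter: low variance
// means high weight, and the merged variance shrinks accordingly.
Estimate mergeEstimates(const std::vector<Estimate>& perCamera) {
    double wsum = 0, psum = 0;
    for (const auto& e : perCamera) {
        double w = 1.0 / e.var;
        wsum += w;
        psum += w * e.pos;
    }
    return {psum / wsum, 1.0 / wsum};
}
```

Two equally-confident cameras at positions 1.0 and 2.0 merge to 1.5 with half the variance of either input.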

I haven't put too much thought into the merging offset problem, but I imagine building a calibration routine for this wouldn't be that hard (tm) to implement within this system. I'm thinking of just putting the ball in the camera intersection areas and estimating the offsets for each camera so as to minimize the jumps between camera positions. I think we can use some sort of skew/rotation matrix to get a good transformation from the ideal camera frames to the real-world camera positions. This can be computed by a separate script, since it's just a simple optimization problem, and saved as config.
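For the translation-only part of that optimization, the least-squares answer is closed-form: given simultaneous sightings of the ball from a camera and a reference camera in the overlap region, the best offset is just the mean difference. A full routine would also fit the rotation/skew terms; `Vec2` and `estimateOffset` are illustrative names:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Vec2 { double x, y; };

// Least-squares translation offset for one camera: average the
// (reference sighting - camera sighting) differences over all pairs of
// simultaneous observations in the overlap region. Applying this offset
// to the camera's output minimizes squared jumps at camera boundaries.
Vec2 estimateOffset(const std::vector<std::pair<Vec2, Vec2>>& pairs) {
    Vec2 sum{0.0, 0.0};
    for (const auto& p : pairs) {
        sum.x += p.second.x - p.first.x;  // reference minus this camera
        sum.y += p.second.y - p.first.y;
    }
    double n = static_cast<double>(pairs.size());
    return {sum.x / n, sum.y / n};
}
```

Since this runs offline on logged overlap sightings, it fits the "separate script, saved as config" plan.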

JNeiger commented 6 years ago

Should fix #1091