janfeitsma / MRA-prototype

MRA (Robocup MSL Reference Architecture) prototype

Provide self-localization component #7

Open janfeitsma opened 1 year ago

janfeitsma commented 1 year ago

Consider implementing self-localization in the MRA repo.

Use cases:

  1. enable Falcons robots to play on fields with a general line configuration, such as the Ambition Challenge field (picture below)
  2. enable junior teams to implement robot localization using component(s) that just work out-of-the-box

Details: the relevant Falcons code has been stable and performing well; it has not been touched for many years. It has two parts:

  1. in the vision (actually multiCam) package, let's call it FalconsLocalizationVision
    • input: observed white pixels in RCS
    • configuration: basic model of field lines (A=22, B=14 etc as defined in rules, typically measured on-premise with a mm-accurate cheap laser) -> this is the limiting factor for going to Ambition Challenge field
    • output: one or more candidate location(s) in FCS
    • method: use the downhill simplex algorithm (cv::DownhillSolver) to fit the observed pixels to the expected (field model) pixels (see the sketch after this list)
  2. in worldModel package, let's call it FalconsLocalizationWorldModel
    • input: FCS vision candidates, encoder velocity feedback in RCS, no compass/IMU
    • configuration: some tuning factors
    • output: "accurate" FCS location and FCS velocity
    • method: mainly use encoders, use vision to correct for drift, initialize playing forward (see the fusion sketch below)
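
For reference, a minimal sketch of how such a fit could look with cv::DownhillSolver. This is not the actual Falcons code: the cost function, the distance-transform input, and all names are illustrative assumptions (building such a distance image from the field model is sketched further below in this thread).

```cpp
// Hypothetical sketch: fit a robot pose (x, y, phi) by minimizing the distance
// between observed white pixels (RCS) and the expected field lines, using
// cv::DownhillSolver (Nelder-Mead downhill simplex).
#include <opencv2/core.hpp>
#include <opencv2/core/optim.hpp>
#include <cmath>
#include <vector>

struct Pose { double x, y, phi; };

// Cost: sum of distances from the transformed pixels to the nearest field line,
// looked up in a precomputed distance-transform image of the rendered field.
class FitCost : public cv::MinProblemSolver::Function
{
public:
    FitCost(std::vector<cv::Point2f> rcsPixels, cv::Mat distanceImage, double pixelsPerMeter)
        : _pixels(std::move(rcsPixels)), _dist(std::move(distanceImage)), _ppm(pixelsPerMeter) {}

    int getDims() const override { return 3; } // (x, y, phi)

    double calc(const double* p) const override
    {
        double cost = 0.0;
        double c = std::cos(p[2]), s = std::sin(p[2]);
        for (const auto& px : _pixels)
        {
            // RCS -> FCS, then FCS -> field-image coordinates (origin at field center)
            double fx = p[0] + c * px.x - s * px.y;
            double fy = p[1] + s * px.x + c * px.y;
            int col = cvRound(fx * _ppm + 0.5 * _dist.cols);
            int row = cvRound(fy * _ppm + 0.5 * _dist.rows);
            if (col >= 0 && col < _dist.cols && row >= 0 && row < _dist.rows)
                cost += _dist.at<float>(row, col);
            else
                cost += 1e3; // penalty for pixels falling outside the modeled field
        }
        return cost;
    }

private:
    std::vector<cv::Point2f> _pixels;
    cv::Mat _dist;   // distance transform of the rendered field lines
    double _ppm;
};

Pose fitPose(const std::vector<cv::Point2f>& rcsPixels, const cv::Mat& distImage,
             double ppm, Pose guess)
{
    auto solver = cv::DownhillSolver::create();
    solver->setFunction(cv::makePtr<FitCost>(rcsPixels, distImage, ppm));
    solver->setInitStep(cv::Mat(cv::Vec3d(0.5, 0.5, 0.2))); // m, m, rad
    cv::Mat x = (cv::Mat_<double>(1, 3) << guess.x, guess.y, guess.phi);
    solver->minimize(x); // returns the final cost; x holds the fitted pose
    return {x.at<double>(0), x.at<double>(1), x.at<double>(2)};
}
```

Running this from several initial guesses would yield the "one or more candidate location(s) in FCS" mentioned above.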

Ambition Challenge field: [image] (retrieved from https://msl.robocup.org/wp-content/uploads/2023/01/Rulebook_MSL2023_v24.1.pdf#section.3.3)
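
And a minimal sketch of the FalconsLocalizationWorldModel idea described above: dead reckoning on encoders every tick, with a small-gain correction toward vision candidates to bound the drift. The complementary-filter form, the names, and the gain are illustrative assumptions, not the actual Falcons implementation.

```cpp
#include <cmath>

struct Pose2D { double x = 0.0, y = 0.0, phi = 0.0; };        // FCS
struct Velocity2D { double vx = 0.0, vy = 0.0, vphi = 0.0; }; // RCS

class LocalizationFilter
{
public:
    // Initialize "playing forward": phi = 0 facing the opponent goal.
    explicit LocalizationFilter(Pose2D initial = {}) : _pose(initial) {}

    // Dead reckoning: rotate RCS encoder velocity into FCS and integrate.
    void predict(const Velocity2D& enc, double dt)
    {
        double c = std::cos(_pose.phi), s = std::sin(_pose.phi);
        _pose.x   += (c * enc.vx - s * enc.vy) * dt;
        _pose.y   += (s * enc.vx + c * enc.vy) * dt;
        _pose.phi += enc.vphi * dt;
    }

    // Vision correction: blend toward the candidate with a small tuning gain,
    // so occasional wrong candidates do not yank the pose around.
    void update(const Pose2D& visionCandidate, double gain = 0.1)
    {
        constexpr double kTwoPi = 6.283185307179586;
        _pose.x += gain * (visionCandidate.x - _pose.x);
        _pose.y += gain * (visionCandidate.y - _pose.y);
        // wrap the heading difference into [-pi, pi] before blending
        _pose.phi += gain * std::remainder(visionCandidate.phi - _pose.phi, kTwoPi);
    }

    Pose2D pose() const { return _pose; }

private:
    Pose2D _pose;
};
```

In this picture, the "dry spells" discussed below correspond to long stretches of predict() calls without any update(): the pose then drifts with the encoder error.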

Open questions:

  1. ~get second opinion from Andre~ - done. Decided to leave dewarp out of scope for now; it is more general than self-localization and basically a different (upstream) component, unless users would bring this up as a highly desired related component.
  2. towards a go: find some team(s) outside Falcons who would actually want to use this ... -> Jan to email teams

janfeitsma commented 1 year ago

It is my expectation that in their current state, the Falcons keeper robot would perform OK-ish, but field robots would struggle a bit with this field. Vision would give relatively few candidates because of the extra/strange lines. The effect could be long dry spells at FalconsLocalizationWorldModel while driving on encoders, with WorldState diverging from player to player, at some point causing bad passes etc.

To contain it, we could tune things we normally never tune, but we might also run into limitations/sensitivities we've never seen before.

This is why I think we should just generalize the configurability of FalconsLocalizationVision to allow configuring the lines as drawn (a configuration sketch follows below). I will ask Andre (our vision expert) for a second opinion.
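
A minimal sketch of what that generalized configurability could look like: describe the field as an arbitrary set of line segments and arcs instead of the fixed rulebook letters. All names here are hypothetical; the A=22/B=14 values come from the issue above, and the 2 m center-circle radius is my assumption from the rulebook.

```cpp
#include <vector>

// Field coordinates in FCS meters; arc angles in degrees (matching cv::ellipse).
struct LineSegment { double x1, y1, x2, y2; double width = 0.125; };
struct Arc { double cx, cy, radius, startAngle, endAngle; double width = 0.125; };

struct FieldModelConfig
{
    std::vector<LineSegment> segments;
    std::vector<Arc> arcs;
};

// A standard field can still be expanded from the familiar letters, while an
// Ambition Challenge field would simply list its extra lines explicitly.
FieldModelConfig makeStandardField(double A = 22.0, double B = 14.0)
{
    double hx = B / 2.0, hy = A / 2.0;
    FieldModelConfig cfg;
    cfg.segments = {
        { -hx, -hy,  hx, -hy },  // back line
        { -hx,  hy,  hx,  hy },  // back line
        { -hx, -hy, -hx,  hy },  // side line
        {  hx, -hy,  hx,  hy },  // side line
        { -hx, 0.0,  hx, 0.0 },  // halfway line
    };
    cfg.arcs = { { 0.0, 0.0, 2.0, 0.0, 360.0 } }; // center circle, 2 m radius (assumed)
    return cfg;
}
```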

andrepool commented 1 year ago

As you already mention, the localization performance will degrade and likely produce wrong locks.

The localization procedure starts with creating a field map that represents the actual field (by just drawing lines and circles from the measured values). It is relatively simple to extend this with the additional lines, which prevents having to deal with wrong locks. A sketch of this drawing step follows below.
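
A minimal sketch of that drawing step, reusing the hypothetical FieldModelConfig from the earlier comment (resolution and names are illustrative):

```cpp
#include <algorithm>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Render the configured lines and circles into a binary field map image.
cv::Mat renderFieldMap(const FieldModelConfig& cfg, cv::Size sizePx, double ppm)
{
    cv::Mat map = cv::Mat::zeros(sizePx, CV_8UC1);
    auto toPx = [&](double x, double y) {
        return cv::Point(cvRound(x * ppm + 0.5 * sizePx.width),
                         cvRound(y * ppm + 0.5 * sizePx.height));
    };
    for (const auto& s : cfg.segments)
        cv::line(map, toPx(s.x1, s.y1), toPx(s.x2, s.y2), cv::Scalar(255),
                 std::max(1, cvRound(s.width * ppm)));
    for (const auto& a : cfg.arcs)
    {
        int r = cvRound(a.radius * ppm);
        cv::ellipse(map, toPx(a.cx, a.cy), cv::Size(r, r), 0.0,
                    a.startAngle, a.endAngle, cv::Scalar(255),
                    std::max(1, cvRound(a.width * ppm)));
    }
    return map;
}

// The distance image used by the solver sketch in the issue description can
// then be derived from this map:
//   cv::Mat inverted, dist;
//   cv::bitwise_not(map, inverted);
//   cv::distanceTransform(inverted, dist, cv::DIST_L2, 3); // CV_32F distances
```

Extending the map with the Ambition Challenge lines then amounts to appending entries to cfg.segments, exactly as described here.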

Another important aspect, missing from the list, is the dewarp, including the calibration procedure. That is crucial for the algorithm (a generic sketch follows below).
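
For context, a generic OpenCV dewarp pattern (not the Falcons calibration procedure): precompute the undistortion maps once from calibrated intrinsics, then remap every frame. The intrinsics here are placeholders that a real calibration would provide.

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

struct Dewarper
{
    cv::Mat map1, map2;

    Dewarper(const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs, cv::Size imageSize)
    {
        // Precompute the per-pixel remap tables once (the expensive part).
        cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(),
                                    cameraMatrix, imageSize, CV_16SC2, map1, map2);
    }

    // Apply per frame: cheap lookup + interpolation.
    cv::Mat apply(const cv::Mat& frame) const
    {
        cv::Mat out;
        cv::remap(frame, out, map1, map2, cv::INTER_LINEAR);
        return out;
    }
};
```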

Depending on the provided camera image, you might also consider the even older omniCam instead of multiCam.