janfeitsma opened this issue 1 year ago
It is my expectation that, in their current state, the Falcons keeper robot would perform OK-ish, but the field robots would struggle a bit with this field. Vision would give relatively few candidates because of the extra/strange lines. The effect could be long dry spells in `FalconsLocalizationWorldModel` while driving on encoders, with `WorldState` diverging from player to player, at some point causing bad passes etc.
To contain it, we could tune things we normally never tune, but we might also run into limitations/sensitivities we've never seen before.
This is why I think we should actually just generalize the configurability of `FalconsLocalizationVision` to allow configuring the lines as drawn. I will ask Andre (our vision expert) for a second opinion.
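To make this concrete, here is a minimal sketch of what such a generalized configuration could look like; all type and field names are assumptions for illustration, not existing Falcons code:

```cpp
#include <vector>

// Hypothetical generalized field configuration: instead of deriving all lines
// from the standard rulebook dimensions, the field becomes an explicit,
// configurable list of primitives, so non-standard (Ambition Challenge) lines
// can be added without code changes.
struct LineSegment
{
    float x1, y1, x2, y2;  // endpoints in field coordinates (meters, origin at field center)
    float thickness;       // line width in meters
};

struct ArcSegment
{
    float cx, cy, radius;        // center and radius in meters
    float startAngle, endAngle;  // radians; a full circle spans 2*pi
};

struct FieldModel
{
    std::vector<LineSegment> lines;
    std::vector<ArcSegment>  arcs;
};
```

The key point is that the field model becomes data instead of hard-coded geometry, so the Ambition Challenge lines are just extra entries.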
As you already mention, the localization performance will degrade and will likely produce wrong locks.
The localization procedure starts with creating a field map that represents the actual field (by just drawing lines and circles from the measured values). It is relatively simple to extend this with the additional lines, which prevents having to deal with wrong locks.
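A minimal sketch of that map-creation step, assuming an OpenCV grayscale map with the origin at the field center; the function and its parameters are illustrative, not the actual Falcons code:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Draw a reference field map from measured dimensions (A = length, B = width).
// Extra/strange lines can simply be appended as more draw calls or segments.
cv::Mat drawFieldMap(double A, double B, double ppm)  // ppm = pixels per meter
{
    const double margin = 1.0;  // meters of border around the field
    int w = cvRound((A + 2 * margin) * ppm);
    int h = cvRound((B + 2 * margin) * ppm);
    cv::Mat map = cv::Mat::zeros(h, w, CV_8UC1);
    auto px = [&](double x, double y) {  // field coordinates -> pixel
        return cv::Point(cvRound((x + A / 2 + margin) * ppm),
                         cvRound((y + B / 2 + margin) * ppm));
    };
    const int t = 2;  // line thickness in pixels
    cv::rectangle(map, px(-A / 2, -B / 2), px(A / 2, B / 2), 255, t);  // field boundary
    cv::line(map, px(0.0, -B / 2), px(0.0, B / 2), 255, t);           // middle line
    cv::circle(map, px(0.0, 0.0), cvRound(2.0 * ppm), 255, t);        // center circle (2 m radius)
    // ... goal and penalty areas follow from the other measured dimensions.
    // Ambition Challenge: append the extra line segments here, ideally read
    // from configuration rather than hard-coded (see discussion above).
    return map;
}
```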
Another important aspect, missing in the list, is the dewarp, including its calibration procedure. That is crucial for the algorithm.
Depending on the provided camera image, you might also consider the even older `omniCam` instead of `multiCam`.
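For completeness, a small sketch of applying a dewarp, assuming the calibration procedure has already produced per-pixel lookup maps (that calibration is the hard, camera-specific part); names are illustrative:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Apply a precomputed pixel-to-pixel mapping to undistort a camera frame.
// mapX/mapY are CV_32FC1 lookup tables produced by calibration, e.g. via
// cv::initUndistortRectifyMap() for a pinhole model, or a custom procedure
// for omni/multi camera setups.
cv::Mat dewarp(const cv::Mat& raw, const cv::Mat& mapX, const cv::Mat& mapY)
{
    cv::Mat undistorted;
    cv::remap(raw, undistorted, mapX, mapY, cv::INTER_LINEAR);
    return undistorted;
}
```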
Consider implementing self-localization in the MRA repo.

Use cases:

Details: relevant Falcons code has been stable and performing well, not touched for many years. It has two parts:

* `vision` (actually `multiCam`) package, let's call it `FalconsLocalizationVision`
  * configurable with the field dimensions (`A=22`, `B=14` etc. as defined in the rules, typically measured on-premise with a mm-accurate cheap laser) -> this is the limiting factor for going to the Ambition Challenge field
  * uses a solver (`opencv::DownHillSolver`) to fit given pixels with expected pixels (see the sketch after this list)
* `worldModel` package, let's call it `FalconsLocalizationWorldModel`
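Since the solver is the core of the fit, here is a sketch of how `cv::DownhillSolver` can be used for it; the cost function, pose parameterization and all names are assumptions about how such a fit could look, not the actual Falcons implementation:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/optim.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical cost function: score a candidate pose (x, y, phi) by how close
// the detected line pixels land to the lines in a reference field map, here
// assumed to be a distance transform (0 on a line, growing with distance).
struct FitFieldPose : public cv::MinProblemSolver::Function
{
    const cv::Mat& fieldMap;                 // distance transform of the field map (CV_32F)
    const std::vector<cv::Point2f>& pixels;  // detected line points in robot frame (meters)
    double resolution;                       // map cells per meter

    FitFieldPose(const cv::Mat& map, const std::vector<cv::Point2f>& px, double res)
        : fieldMap(map), pixels(px), resolution(res) {}

    int getDims() const override { return 3; }  // x, y, phi

    double calc(const double* p) const override
    {
        double cost = 0.0;
        const double c = std::cos(p[2]), s = std::sin(p[2]);
        for (const auto& q : pixels)
        {
            // transform the detected point into field coordinates, then into map cells
            double fx = p[0] + c * q.x - s * q.y;
            double fy = p[1] + s * q.x + c * q.y;
            int mx = cvRound(fx * resolution) + fieldMap.cols / 2;
            int my = cvRound(fy * resolution) + fieldMap.rows / 2;
            if (mx < 0 || my < 0 || mx >= fieldMap.cols || my >= fieldMap.rows)
                cost += 1.0;                         // off-map penalty
            else
                cost += fieldMap.at<float>(my, mx);  // distance to nearest line
        }
        return cost / std::max<size_t>(pixels.size(), 1);
    }
};

// Usage sketch: refine an initial pose guess (x, y, phi).
double fitPose(const cv::Mat& distanceMap,
               const std::vector<cv::Point2f>& linePixels,
               cv::Vec3d& pose)
{
    cv::Ptr<cv::DownhillSolver> solver = cv::DownhillSolver::create();
    solver->setFunction(cv::makePtr<FitFieldPose>(distanceMap, linePixels, 50.0));
    solver->setInitStep(cv::Mat(cv::Vec3d(0.1, 0.1, 0.05)));  // initial simplex steps
    cv::Mat x = (cv::Mat_<double>(1, 3) << pose[0], pose[1], pose[2]);
    double residual = solver->minimize(x);
    pose = cv::Vec3d(x.at<double>(0), x.at<double>(1), x.at<double>(2));
    return residual;
}
```

A distance-transform map makes the cost smooth, which suits a derivative-free simplex method like this much better than a binary line mask would.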
Ambition Challenge field: (retrieved from https://msl.robocup.org/wp-content/uploads/2023/01/Rulebook_MSL2023_v24.1.pdf#section.3.3)
Open questions: