thaytan closed this 2 years ago
Would these equations assist with that?
What I had in mind when I wrote this is a model that would take into account the luminosity of the LED, its physical size, and the distance and angle to the camera in the proposed pose, and spit out an expected brightness range for the pixels, the size of the bounding box in pixels, and possibly also a 'shape' to match against. When checking whether the camera-observed pixels match a proposed pose, it would be good to know that we expect an LED to only appear as 1-2 pixels, for example, and not match against a blob that's many times bigger than that.
That's definitely in 'future refinement' territory though - it'll produce more accurate and stable pose observations, but isn't critical to "getting something working" at this stage.
Would this have to be done via maths, or would we be able to help by gathering data?
Maths mostly, and definitely an item for the distant future still.
9f118cd9970d7cf00324a4a25e25ae15ccefabd2 adds some prediction of the expected size of LEDs at a given distance. It's hard to tell if it helps yet, but it didn't seem to hurt.
There was a bug in this initial implementation, fixed in 02d65563b35722c733eedb41d03f8c38f5f00c02. I'd say it's working quite well, especially as the devices move further from the camera.
If we use information about the expected size of an LED at a given distance, we can more intelligently constrain the match distance for blobs to LEDs in rift-sensor-pose-helper.c