Closed PeterJCLaw closed 10 months ago
You may want to look at how april_vision tests its marker object. It uses Webots to generate marker images with markers at known poses and then feeds the images to a test script to assert that the calculated poses are correct. Here the two halves can be combined, as we're using Webots' detection.
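As a rough illustration of that pattern (the helper name, tuple layout, and tolerance below are invented for this sketch, not april_vision's actual test code), the assertion half might look like:

```python
import math

def assert_pose_close(expected, actual, tol=1e-3):
    """Compare (x, y, z, yaw, pitch, roll) pose tuples element-wise."""
    labels = ("x", "y", "z", "yaw", "pitch", "roll")
    for name, e, a in zip(labels, expected, actual):
        assert math.isclose(e, a, abs_tol=tol), f"{name}: expected {e}, got {a}"

# e.g. a marker placed 1m ahead of the camera, rotated 45 degrees about vertical:
known_pose = (0.0, 0.0, 1.0, math.pi / 4, 0.0, 0.0)
# In real tests the second tuple would come from the vision pipeline rather
# than being written out by hand.
assert_pose_close(known_pose, (0.0, 0.0, 1.0, 0.7853, 0.0, 0.0))
```

The value of generating the images at known poses is precisely that `known_pose` is ground truth rather than a regression snapshot.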
The order of orientation rotations is shown here: https://github.com/WillB97/april_vision/blob/main/tests/webots_generator/worlds/webots_generator.wbt#L37-L44
The maths required to convert Webots' orientation to yaw, pitch and roll is:
```python
from math import asin, atan2, cos, hypot, sin

# Unpack the axis-angle to allow for remapping and normalisation
_x, _y, _z, angle = recognition.getOrientation()
# Normalise the axis
axis_mag = hypot(_x, _y, _z)
_x, _y, _z = _x / axis_mag, _y / axis_mag, _z / axis_mag
# Remap the axis to match the kit's coordinate system
x, y, z = -_x, _y, -_z
# Calculate the intrinsic Tait-Bryan angles following the z-y'-x'' convention
# Approximately https://w.wiki/7cuk with some sign corrections,
# adapted to axis-angle and simplified
yaw = atan2(
    z * sin(angle) + x * y * (cos(angle) - 1),
    1 + (y ** 2 + z ** 2) * (cos(angle) - 1),
)
pitch = asin(x * z * (1 - cos(angle)) + y * sin(angle))
roll = atan2(
    x * sin(angle) + y * z * (cos(angle) - 1),
    1 + (x ** 2 + y ** 2) * (cos(angle) - 1),
)
```
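A quick sanity check for this conversion is to wrap the maths in a function (a hypothetical test helper, not part of the kit) and feed it pure single-axis rotations; each should land on exactly one of the three angles after the axis remap:

```python
from math import asin, atan2, cos, hypot, isclose, sin

def axis_angle_to_ypr(_x, _y, _z, angle):
    """Same maths as above, wrapped so it can be exercised standalone."""
    axis_mag = hypot(_x, _y, _z)
    _x, _y, _z = _x / axis_mag, _y / axis_mag, _z / axis_mag
    x, y, z = -_x, _y, -_z  # remap to the kit's coordinate system
    yaw = atan2(
        z * sin(angle) + x * y * (cos(angle) - 1),
        1 + (y ** 2 + z ** 2) * (cos(angle) - 1),
    )
    pitch = asin(x * z * (1 - cos(angle)) + y * sin(angle))
    roll = atan2(
        x * sin(angle) + y * z * (cos(angle) - 1),
        1 + (x ** 2 + y ** 2) * (cos(angle) - 1),
    )
    return yaw, pitch, roll

# Webots -z maps to the kit's +z, so this is a pure yaw rotation:
yaw, pitch, roll = axis_angle_to_ypr(0, 0, -1, 0.5)
assert isclose(yaw, 0.5) and abs(pitch) < 1e-12 and abs(roll) < 1e-12
# Webots +y maps to the kit's +y (pure pitch):
_, pitch, _ = axis_angle_to_ypr(0, 1, 0, 0.5)
assert isclose(pitch, 0.5)
# Webots -x maps to the kit's +x (pure roll):
_, _, roll = axis_angle_to_ypr(-1, 0, 0, 0.5)
assert isclose(roll, 0.5)
```

Multi-axis cases are where the sign conventions really bite, which is what the generated test poses cover.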
I'm going to merge this for speed and to unblock other things. Happy to continue to discuss/address things though.
This updates the vision API to match that expected by the SR2024 kit.
Changes
Included here:
Not included here:
main
together).

Code review
I suggest we focus on the correctness of the data returned from the API and whether the API shape matches the kit API. It may also be worth splitting the review, as I realise the diff is pretty big.

I'm happy to do a walk-through review if that would be useful; alternatively, here's a guide to the changes.

The core things to review (and I suggest this order) are:

- `controllers/test_supervisor/test_supervisor.py`, including validating that the manually constructed test cases there are correct (but don't worry about the generated ones yet)
- `protos/Components/SRCamera.proto` and `worlds/Tests.wbt` (again, ignore the generated cases for now)
- `script/testing/integration-test` and `.github/workflows/integration.yml` (renamed from `run-match.yml`)
- `modules/sr/robot3/camera.py` (it may be easier to review this as if it were new rather than as a diff)
- `modules/sr/robot3/vision/convert.py` and `modules/sr/robot3/vision/markers.py` (renamed and simplified from `tokens.py`; reviewing the diff of that may be helpful)
- `april_vision`; I've tried to comment these inline so they're easy to keep in sync

That covers the core of the logic. Much of the rest of the changes are accounting for the change in alignment of the marker proto to Webots axes, or are other plumbing:
- The `*.proto` and `Arena.wbt` files shouldn't have changed in a way which changes the SR2023 arena -- that should still look exactly as it did and work the same
- `script/testing/insert-poses.py` computes the possible combinations of π/4 rotations on each axis and inserts them reproducibly into both our testing world file and our test supervisor; there's validation that things are in sync via the new job in `.github/workflows/tests.yml` (this is a general pattern which could be useful for us elsewhere)

Verification
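For reference, the enumeration that `insert-poses.py` is described as performing can be sketched in a few lines (an illustrative standalone snippet, not the script's actual implementation; the 8 quarter-turn steps per axis is an assumption here):

```python
from itertools import product
from math import pi

def quarter_turn_combinations(steps: int = 8):
    """Yield every (rx, ry, rz) combination of pi/4 rotations, in a fixed
    order so the generated poses are reproducible between runs."""
    angles = [n * pi / 4 for n in range(steps)]
    yield from product(angles, repeat=3)

combos = list(quarter_turn_combinations())
assert len(combos) == 8 ** 3  # 512 poses under the 8-steps-per-axis assumption
```

Emitting the same ordered sequence into both the world file and the supervisor is what lets the CI job check the two stay in sync.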
I've done some light testing manually with the keyboard robot, though mostly I'm relying on validation from the unit tests. It would be great to do more manual testing to confirm that the orientation data in particular is correct.
I've manually compared a small number of the multi-axis markers to the images in april_vision's tests and confirmed that they line up.
Links
Fixes #377