srobo / competition-simulator

A simulator for Student Robotics Virtual Competitions
https://studentrobotics.org/docs/simulator/
MIT License

Approximately match kit's vision implementation #366

Closed · WillB97 closed 1 year ago

WillB97 commented 1 year ago

Moves the vision implementation to use the output data structures used by the kit. Cartesian, Spherical and Orientation have been verified.

Outstanding:

Currently only tested in R2023a.

Fixes #360

Builds on #359
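
For reference, the kit-style output structures mentioned above have roughly this shape (a sketch only: the field names here are illustrative assumptions, not copied from the kit or this PR):

from typing import NamedTuple

# Illustrative only: field names are assumptions, not the kit's actual API.

class CartesianCoordinates(NamedTuple):
    """Marker position in metres, relative to the camera."""
    x: float
    y: float
    z: float

class SphericalCoordinate(NamedTuple):
    """Marker position as a distance and two angles from the camera."""
    distance: float          # metres
    horizontal_angle: float  # radians
    vertical_angle: float    # radians

class Orientation(NamedTuple):
    """Marker rotation relative to the camera, as Euler angles in radians."""
    yaw: float
    pitch: float
    roll: float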

trickeydan commented 1 year ago

What is the impact of changing the quaternion library on the rest of the kit?

How robust is this new library? It is not widely used, has limited tests and no typing.

Can we be sure that dropping numpy won't break student code? Is this definitely backwards compatible?

Why are we copying in the code rather than submoduling or using a PyPI package?

PeterJCLaw commented 1 year ago

Some context here:

The simulator distribution doesn't (currently) have any dependencies during development. This is very much by design -- so that competitors don't need to install anything extra to use it. While that may be a decision to revisit in a future year, I don't think it's something we want to change during the competition year.

The removal of numpy here is presumably(?) a change relative to early versions of the PR rather than to what's released -- the simulator as published doesn't have any dependency on numpy (though competitor code is welcome to as we guarantee that it'll be present in the competition environment).

The same is true of the quaternion library -- previously there wasn't one, so if there's one here now and the API is compatible I think it's fine (on that front). (No comment on its suitability or quality)

WillB97 commented 1 year ago

Squaternion is a pure-Python implementation of the quaternion maths we need. Teams will have numpy in the virtual competition, but may not have it during development.
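
For illustration, a minimal sketch of squaternion in use (assuming the published squaternion package from PyPI):

from squaternion import Quaternion

# Build a quaternion from Euler angles (roll, pitch, yaw), here in degrees.
q = Quaternion.from_euler(0, 0, 90, degrees=True)

# Convert back to Euler angles, or compose rotations by multiplying.
roll, pitch, yaw = q.to_euler(degrees=True)
combined = q * Quaternion.from_euler(10, 0, 0, degrees=True)

print(q)                 # the (w, x, y, z) components
print(roll, pitch, yaw)  # approximately 0.0 0.0 90.0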

The remaining differences from the kit API are:

PeterJCLaw commented 1 year ago

Testing this locally in R2022b using the keyboard robot fails to find any markers, even when the image overlay does show markers in shot, annotated with the red border (i.e. the recognition objects have been found).

PeterJCLaw commented 1 year ago

The CI failure here is due to the tuple re-ordering noted in https://github.com/WillB97/april_vision/issues/8. Since fixing that is potentially itself a breaking change, the solution isn't clear right now; however, I definitely don't think we should introduce the same breakage into the simulator.

PeterJCLaw commented 1 year ago

My testing of this seems to be not finding markers when they're near the edge of the camera's field of view, including cases where the marker is only just overlapping the edge of the image; cases where I would definitely expect the real vision system to pick up the marker. Is that expected?

PeterJCLaw commented 1 year ago

> My testing of this seems to be not finding markers when they're near the edge of the camera's field of view, including cases where the marker is only just overlapping the edge of the image; cases where I would definitely expect the real vision system to pick up the marker. Is that expected?

https://github.com/srobo/competition-simulator/pull/366/commits/ec23ea7cb67c1033fa8f05f2735cac73465bd16a fixes this by moving the corner points inwards so we're not caring about the whitespace boundary.
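
For context, the approach is roughly as below (a hypothetical sketch, not the code from that commit): pull each detected corner in towards the quad's centroid, so the reported corners sit on the tag itself rather than on the surrounding white quiet zone.

def inset_corners(
    corners: list[tuple[float, float]],
    fraction: float,
) -> list[tuple[float, float]]:
    """Move each corner towards the quad's centroid by the given fraction."""
    cx = sum(x for x, _ in corners) / len(corners)
    cy = sum(y for _, y in corners) / len(corners)
    return [
        (x + (cx - x) * fraction, y + (cy - y) * fraction)
        for x, y in corners
    ]

# The right fraction depends on the tag family's geometry (how wide the
# white border is relative to the whole quad); 0.2 here is a placeholder.
print(inset_corners([(0.0, 0.0), (8.0, 0.0), (8.0, 8.0), (0.0, 8.0)], 0.2))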

PeterJCLaw commented 1 year ago

Testing suggests that this allows detection of markers at arbitrarily oblique angles, something that the previous implementation did not (it capped recognition at 75°). We should probably reinstate this to avoid silliness.

PeterJCLaw commented 1 year ago

> Testing suggests that this allows detection of markers at arbitrarily oblique angles, something that the previous implementation did not (it capped recognition at 75°). We should probably reinstate this to avoid silliness.

I had a go at trying to implement this, with the core being this function:

def obliqueness_to_camera(marker: Marker) -> float:
    """
    Compute the angle between the marker's normal and its position vector.
    """

    # Reference direction for a marker is along the axis the camera is facing
    # Use `CartesianCoordinates.from_tvec` so we get matching axis conversions.
    reference_direction = vectors.Vector(CartesianCoordinates.from_tvec(1, 0, 0))

    rotation_matrix = matrix.Matrix(marker.orientation.rotation_matrix)
    marker_normal = rotation_matrix * reference_direction

    position = vectors.Vector(marker.cartesian)

    return vectors.angle_between(position, marker_normal)

However, the vectors I'm getting for the marker's normal don't seem to make sense. What am I doing wrong here?
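
(For context, the intended use of that function was a filter along these lines; the 75° threshold comes from the previous implementation, while the wrapper itself is only a sketch:)

import math

MAX_OBLIQUENESS = math.radians(75)

def filter_oblique_markers(markers: list[Marker]) -> list[Marker]:
    """Drop markers which face the camera too edge-on to be readable."""
    return [
        marker
        for marker in markers
        if obliqueness_to_camera(marker) <= MAX_OBLIQUENESS
    ]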

WillB97 commented 1 year ago

> Testing suggests that this allows detection of markers at arbitrarily oblique angles, something that the previous implementation did not (it capped recognition at 75°). We should probably reinstate this to avoid silliness.

The current kit vision is capable of detecting markers at oblique angles (beyond 90° at the edge of images). Given that they only have 3.5 weeks left, I don't see an issue with vision being slightly better.

PeterJCLaw commented 1 year ago

> The current kit vision is capable of detecting markers at oblique angles (beyond 90° at the edge of images).

No, it isn't. Beyond 90° is physically impossible (that implies the marker is facing away from the camera) and local testing suggests that the borderline cases where the marker is very oblique are also not picked up.

8BitJosh commented 1 year ago

> > The current kit vision is capable of detecting markers at oblique angles (beyond 90° at the edge of images).
>
> No, it isn't. Beyond 90° is physically impossible (that implies the marker is facing away from the camera) and local testing suggests that the borderline cases where the marker is very oblique are also not picked up.

[diagram: marker angle measured against the plane perpendicular to the camera's view, not against the camera-to-marker line]

Beyond 90° isn't impossible: the angle is to the flat plane perpendicular to the camera's view, not the angle relative to the camera's position. See the attached diagram as an example.

PeterJCLaw commented 1 year ago

> Beyond 90° isn't impossible: the angle is to the flat plane perpendicular to the camera's view, not the angle relative to the camera's position. See the attached diagram as an example.

Ah, I think you've misunderstood what calculation I'm doing. I'm comparing the orientation to the vector along the line from the camera to the marker. Thus the zero angle (and the 90°) is relative to the position of the marker. 90° always represents a marker arranged perfectly edge-on to the camera.
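
To make the two conventions concrete, here's a small sketch (numpy, with invented example values) computing both angles for a marker near the edge of the frame: measured against the camera-to-marker line, a readable marker can never exceed 90°, while measured against the camera's optical axis it can.

import numpy as np

def angle_between(a: np.ndarray, b: np.ndarray) -> float:
    """Angle between two vectors, in degrees."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Camera at the origin looking along +z. The marker is well off to the
# side and turned so its face points across the image, tilted slightly
# away from the camera's image plane (invented values).
position = np.array([2.0, 0.0, 1.0])   # camera -> marker
normal = np.array([-1.0, 0.0, 0.2])    # direction the marker faces

# Line-of-sight convention: angle between the marker's facing direction
# and the marker -> camera line. Beyond 90° the marker faces away from
# the camera and is physically unreadable.
line_of_sight = angle_between(-position, normal)   # ~38°: readable

# Optical-axis convention: angle measured against the camera's view
# direction (equivalently, tilt out of the plane perpendicular to it).
# This can exceed 90° for a marker that is still readable at the edge.
optical_axis = angle_between(np.array([0.0, 0.0, -1.0]), normal)  # ~101°

print(f"line-of-sight: {line_of_sight:.1f}°, axis-relative: {optical_axis:.1f}°")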