Summary
Vision tracking will be important this game, so we need a vision subsystem. We will probably run PhotonVision on a Raspberry Pi, so this subsystem doesn't need to do any vision pipelining itself. It only needs to read values from PhotonVision and control the LED ring that illuminates the reflective tape. To do this, it should have the following functions:
a function to get horizontal error between the robot's center and the cone poles
a function to get the estimated distance to the cone pole
a function to align with the cube scoring location (the one with the AprilTag)
a function to get distance to the cube scoring location
a function to get angle of the cube scoring location (iirc there's only an AprilTag on the middle one, so to align with the top one, we'd need to add an offset)
a function to reset the subsystem (although this doesn't need to currently do anything)
note: each traffic cone pole has its own tape iirc, so we'll need to adjust whether we're aligning with the top or middle one depending on the button press or position of the arm
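The distance functions above can be sketched with the usual pitch-angle trig, which is the same math behind PhotonVision's `PhotonUtils.calculateDistanceToTargetMeters`. A minimal, self-contained sketch follows; the camera and target heights in `main` are hypothetical placeholders, and on the robot the target pitch would come from the PhotonVision result's best target:

```java
public class VisionMath {
    /**
     * Floor distance from the camera to a target, from the pitch angle the
     * vision system reports for it. Same trig as PhotonVision's
     * PhotonUtils.calculateDistanceToTargetMeters.
     */
    public static double distanceToTargetMeters(
            double cameraHeightMeters,
            double targetHeightMeters,
            double cameraPitchRadians,
            double targetPitchRadians) {
        // Height difference over the tangent of the total elevation angle.
        return (targetHeightMeters - cameraHeightMeters)
                / Math.tan(cameraPitchRadians + targetPitchRadians);
    }

    public static void main(String[] args) {
        // Hypothetical numbers: camera 0.5 m up and level, tape strip 1.17 m
        // up, target seen 20 degrees above the camera's centerline.
        double d = distanceToTargetMeters(0.5, 1.17, 0.0, Math.toRadians(20.0));
        System.out.printf("estimated distance = %.3f m%n", d);
    }
}
```

Note that this estimate assumes a fixed camera pitch, so it degrades if the camera mount flexes or the robot tilts.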
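For the "add an offset" idea with the cube nodes: given the yaw and floor distance PhotonVision reports for the AprilTag, the bearing to a goal that sits deeper or off to the side of the tag can be recomputed in the floor plane. This is a geometry sketch only; the offset values would have to come from field measurements, and the names here are hypothetical:

```java
public class TargetOffset {
    /**
     * Yaw (radians, CCW-positive) to a point offset from a vision target,
     * given the target's measured yaw and floor distance from the camera.
     *
     * @param yawRad       yaw reported for the AprilTag
     * @param distM        floor distance to the AprilTag
     * @param forwardOffM  how much deeper into the grid the goal sits
     * @param lateralOffM  sideways offset of the goal from the tag
     */
    public static double yawToOffsetTarget(double yawRad, double distM,
                                           double forwardOffM, double lateralOffM) {
        // Tag position in the camera's floor-plane frame.
        double x = distM * Math.cos(yawRad);
        double y = distM * Math.sin(yawRad);
        // Shift to the actual goal and recompute the bearing.
        return Math.atan2(y + lateralOffM, x + forwardOffM);
    }

    public static void main(String[] args) {
        // Hypothetical example: tag seen 10 degrees off at 2 m; goal 0.4 m
        // deeper, directly behind the tag.
        double yaw = yawToOffsetTarget(Math.toRadians(10.0), 2.0, 0.4, 0.0);
        System.out.printf("adjusted yaw = %.2f deg%n", Math.toDegrees(yaw));
    }
}
```

Because the goal is farther away than the tag, the adjusted yaw is always smaller in magnitude than the raw tag yaw, so aiming at the raw tag angle would slightly over-rotate.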
Work Required
Verification