Flash3388 / Flash2024


Limelight AprilTags + Automation Actions #3

Open tomtzook opened 6 months ago

tomtzook commented 6 months ago

Learn how to use the Limelight AprilTag pipelines: configuring them and reading data from the camera. Build test Actions to drive the Swerve based on info from the Limelight; more specifically, align to a target, move to a target, and general identification of a target or multiple targets.

NoamZW commented 6 months ago

As of now, I have written AlignToTarget and MoveToTarget actions using the Limelight. The Limelight itself detects and analyzes the AprilTags correctly, and it is already mounted on the robot in a "permanent" spot. I read this documentation in order to work with the Limelight: "Everything we need in order to work with the limelight" and "Estimating the distance from a target".
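For illustration, here is a framework-agnostic sketch of the alignment idea; it is not the actual AlignToTarget action, and the SwerveRotator interface, gain and tolerance are placeholders. It only reads the standard Limelight tx entry (horizontal offset to the target, in degrees) and turns the robot proportionally until the offset is small.

```java
import edu.wpi.first.networktables.NetworkTableInstance;

// Sketch only: a proportional align-to-target step using the Limelight's "tx" entry.
// SwerveRotator, KP and TOLERANCE_DEG are illustrative placeholders, not project code.
public class AlignToTargetSketch {
    private static final double KP = 0.02;           // assumed proportional gain
    private static final double TOLERANCE_DEG = 1.0; // assumed acceptable error

    public interface SwerveRotator {
        void rotate(double speed); // hypothetical drive interface: positive = counter-clockwise
    }

    // Run periodically; returns true once the robot is aligned with the target.
    public static boolean alignStep(SwerveRotator swerve) {
        double tx = NetworkTableInstance.getDefault()
                .getTable("limelight")
                .getEntry("tx")
                .getDouble(0.0);

        if (Math.abs(tx) <= TOLERANCE_DEG) {
            swerve.rotate(0.0);
            return true;
        }

        // tx > 0 means the target is to the right, so rotate clockwise (negative speed)
        swerve.rotate(-KP * tx);
        return false;
    }
}
```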

The Pigeon was unplugged as of the last meeting, so once we replug it, those actions should work.

tomtzook commented 5 months ago

AlignToTarget seems to work so far

NoamZW commented 5 months ago

In the past week, I've been trying to estimate the distance between the Limelight and the AprilTag, but without a big breakthrough. So far I tried using the heights of the camera and the target together with TY (the vertical angle between the camera and the target), but struggled with errors that arose from an imprecise camera mounting angle and a target that was nearly level with the camera (TY was close to 0). See "Calculating distance using Trigonometry".
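For reference, a minimal sketch of that trigonometric estimate, assuming the mount height, mount angle and target height constants below (placeholders, not the robot's measured values):

```java
// Sketch of the height + TY method: distance = (targetHeight - cameraHeight) / tan(mountAngle + ty).
public final class TrigDistance {
    private static final double CAMERA_HEIGHT_METERS = 0.45;  // assumed mount height
    private static final double TARGET_HEIGHT_METERS = 1.32;  // assumed AprilTag center height
    private static final double CAMERA_PITCH_DEGREES = 25.0;  // assumed mount angle

    // tyDegrees is the vertical angle to the target reported by the Limelight ("ty").
    public static double estimateDistance(double tyDegrees) {
        double angle = Math.toRadians(CAMERA_PITCH_DEGREES + tyDegrees);
        // When the target is nearly level with the camera, angle ~ 0 and tan(angle) ~ 0,
        // which is exactly the instability described above.
        return (TARGET_HEIGHT_METERS - CAMERA_HEIGHT_METERS) / Math.tan(angle);
    }
}
```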

After struggling for a while to get the exact values needed for that calculation to work, I changed methods and used area instead of height. For some reason this method didn't work as well as I hoped: it required finding a focal length, which was less efficient and didn't use the services the Limelight already provides for us.
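For completeness, a minimal sketch of the area-based idea, assuming a single calibration measurement (a known distance and the target area "ta" reported at that distance); the apparent area scales roughly with 1/distance², hence the square root:

```java
// Sketch only: estimate distance from the Limelight's target area ("ta", percent of image),
// calibrated against one known (distance, area) measurement.
public final class AreaDistance {
    public static double estimateDistance(double currentArea,
                                           double knownArea,
                                           double knownDistanceMeters) {
        if (currentArea <= 0) {
            return Double.NaN; // no target visible
        }
        // area ~ 1/distance^2  =>  distance ~ knownDistance * sqrt(knownArea / currentArea)
        return knownDistanceMeters * Math.sqrt(knownArea / currentArea);
    }
}
```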

Now I've started using the 3D position, which gives me a vector in space between the target and the camera. This way is much more efficient and, with the right values, will give me an accurate distance. For this I'm using the targetpose_cameraspace entry, which gives me the 3D position of the target relative to the camera. See "Basic Targeting Data".

Using this entry, I'll get the X, Y and Z components of the vector between the camera and the target (for example, for the X component: cameraPose3dTargetSpace.getX()). To calculate the distance I'll use the following formula, where X, Y and Z are the vector components: distance = Math.sqrt(X*X + Y*Y + Z*Z);
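A minimal sketch of this approach, assuming the standard NetworkTables entry name targetpose_cameraspace with the documented layout [x, y, z, roll, pitch, yaw] of the target in the camera's frame:

```java
import edu.wpi.first.networktables.NetworkTableInstance;

// Sketch only: read the target's pose in camera space and take the Euclidean norm
// of the translation to get the camera-to-target distance.
public final class TagDistance3d {
    public static double distanceToTargetMeters() {
        double[] pose = NetworkTableInstance.getDefault()
                .getTable("limelight")
                .getEntry("targetpose_cameraspace")
                .getDoubleArray(new double[6]);

        double x = pose[0];
        double y = pose[1];
        double z = pose[2];

        // distance = sqrt(x^2 + y^2 + z^2)
        return Math.sqrt(x * x + y * y + z * z);
    }
}
```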

I haven't tried this method yet, so I'll update on the progress once I do.

yaronkle commented 5 months ago

Hi Noam, regarding the area method: do you have a table of measured values that you can share? For example:
distance1, area on camera1
distance2, area on camera2
...

NoamZW commented 5 months ago

I haven't written down the measurements; next time I'm working on the robot, I'll check it and get back to you.

tomtzook commented 5 months ago

@yaronkle

Hi Noam, regarding the area method: do you have a table of measured values that you can share? For example:
distance1, area on camera1
distance2, area on camera2
...

No need to measure using area manually, the AprilTag library provides better and more precise calculations. So the Limelight gives us complete info on its own, including positioning.

See the API for limelight: https://docs.limelightvision.io/docs/docs-limelight/apis/complete-networktables-api#apriltag-and-3d-data
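For example, a minimal sketch of reading the field-space robot pose ("botpose") straight from that API, assuming the documented entry layout [x, y, z, roll, pitch, yaw, ...] in meters and degrees:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.networktables.NetworkTableInstance;

// Sketch only: read the Limelight's field-space robot pose from NetworkTables.
public final class LimelightFieldPose {
    public static Pose2d readBotPose() {
        var table = NetworkTableInstance.getDefault().getTable("limelight");

        // tv == 1 means a valid target is currently visible
        if (table.getEntry("tv").getDouble(0.0) != 1.0) {
            return null;
        }

        double[] botpose = table.getEntry("botpose").getDoubleArray(new double[6]);
        // x, y in meters; yaw in degrees
        return new Pose2d(botpose[0], botpose[1], Rotation2d.fromDegrees(botpose[5]));
    }
}
```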

tomtzook commented 5 months ago

@NoamZW No need for automated actions for Limelight-based motion; we'll use path planning and the like instead. Work on checking field positioning based on AprilTags, plus general field positioning with odometry.
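A sketch of what that fusion could look like with WPILib's SwerveDrivePoseEstimator; the kinematics, gyro angle and module positions are placeholders for whatever the swerve subsystem already tracks, and the vision pose would come from something like the botpose read above:

```java
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.SwerveDriveKinematics;
import edu.wpi.first.math.kinematics.SwerveModulePosition;
import edu.wpi.first.wpilibj.Timer;

// Sketch only: combine swerve odometry with AprilTag field poses.
public class FieldPositioning {
    private final SwerveDrivePoseEstimator estimator;

    public FieldPositioning(SwerveDriveKinematics kinematics,
                            Rotation2d gyroAngle,
                            SwerveModulePosition[] modulePositions) {
        estimator = new SwerveDrivePoseEstimator(
                kinematics, gyroAngle, modulePositions, new Pose2d());
    }

    // Call periodically with fresh odometry data.
    public void updateOdometry(Rotation2d gyroAngle, SwerveModulePosition[] modulePositions) {
        estimator.update(gyroAngle, modulePositions);
    }

    // Call when the Limelight reports a valid field pose (e.g. from "botpose"),
    // with the measurement's total latency in seconds.
    public void addVisionPose(Pose2d visionPose, double latencySeconds) {
        estimator.addVisionMeasurement(visionPose, Timer.getFPGATimestamp() - latencySeconds);
    }

    public Pose2d getEstimatedPose() {
        return estimator.getEstimatedPosition();
    }
}
```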