WesleyJ-128 opened this issue 1 year ago
LimeLight subsystem created, next steps:
Create a function within the LimeLight subsystem to switch between pipelines (see the sketch below).
Diego was working on this but hasn't gotten their GitHub account added yet; once it is, we can assign this.
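A minimal sketch of what that function could look like, assuming a WPILib command-based subsystem in Java and the default "limelight" NetworkTables name (the class and method names here are placeholders, not necessarily what Diego has written):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj2.command.SubsystemBase;

public class LimeLight extends SubsystemBase {
  // Default table name for a single Limelight; change if the camera has been renamed.
  private final NetworkTable table = NetworkTableInstance.getDefault().getTable("limelight");

  /** Switches the active vision pipeline (valid indices are 0-9). */
  public void setPipeline(int index) {
    table.getEntry("pipeline").setNumber(index);
  }

  /** Returns the pipeline index the camera reports as currently active. */
  public int getActivePipeline() {
    return (int) table.getEntry("getpipe").getDouble(0);
  }
}
```

It would probably be worth giving each pipeline (AprilTag, cone, cube, etc.) a named constant so commands don't pass raw index numbers around.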
I'll likely add this limelightvision.io tip to our Slack #cad channel:
And from page 41 of the game manual: GRID AprilTags are centered on the width of the front face of the middle ROW CUBE NODES and elevated such that the distance from the carpet to the bottom of the AprilTag is 1 ft. 2¼ in. (~36 cm). Markers on the DOUBLE SUBSTATIONS are centered on the width of the assembly and are mounted such that the distance from the carpet to the bottom of the AprilTag is 1 ft. 11⅜ in. (~59 cm).
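Those mounting heights are exactly what the standard Limelight distance estimate needs: d = (h_target - h_camera) / tan(θ_camera + ty). A rough sketch, where the camera height and pitch are placeholder numbers to be measured off the robot, and the half-tag offset to the tag center is a guess to verify against the manual:

```java
// Rough horizontal-distance estimate to a GRID AprilTag from the Limelight's ty value.
public final class TagDistance {
  // Manual: bottom of a GRID tag is 1 ft 2.25 in (~0.36 m) off the carpet. The added offset to
  // the tag center is a placeholder; check the printed tag size before trusting it.
  private static final double TAG_CENTER_HEIGHT_METERS = 0.36 + 0.10;
  private static final double CAMERA_HEIGHT_METERS = 0.25;  // placeholder, measure on the robot
  private static final double CAMERA_PITCH_DEGREES = 15.0;  // placeholder, up from horizontal

  /** Horizontal distance to the tag in meters, given ty in degrees. */
  public static double toGridTag(double tyDegrees) {
    double angleRad = Math.toRadians(CAMERA_PITCH_DEGREES + tyDegrees);
    return (TAG_CENTER_HEIGHT_METERS - CAMERA_HEIGHT_METERS) / Math.tan(angleRad);
  }

  private TagDistance() {}
}
```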
Also, for detecting game pieces, even if we do get a Coral, it may be worth seeing how far we can get with just GRIP pipelines. The customized version of GRIP for Limelight is available on their downloads page.
And for tracking gamepiece placement targets in the grid, Point-of-Interest Tracking looks useful:
Point-of-Interest tracking allows you to define a 3D point of interest relative to an AprilTag.
Let’s say you are trying to target a field feature that is 6 inches to the left and 2 inches behind an AprilTag. You can simply define that point of interest in the web interface (in meters), and then track this 3D point using tx and ty as if it existed as a real-world target.
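Since the point of interest just shows up through the normal tx/ty outputs, aiming at it is the usual proportional-turn pattern. A minimal sketch; the gain, deadband, and output sign are all assumptions that would need tuning on the drivetrain:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public final class AimAtPoi {
  private static final double KP_TURN = 0.02;          // placeholder proportional gain
  private static final double DEADBAND_DEGREES = 1.0;  // close enough: stop correcting

  /** Turn command that rotates the robot toward the tracked point of interest. */
  public static double turnOutput() {
    NetworkTable table = NetworkTableInstance.getDefault().getTable("limelight");
    boolean hasTarget = table.getEntry("tv").getDouble(0) >= 1.0;
    double tx = table.getEntry("tx").getDouble(0.0);
    if (!hasTarget || Math.abs(tx) < DEADBAND_DEGREES) {
      return 0.0;
    }
    // Sign depends on the drivetrain's rotation convention; flip it if the robot turns away.
    return -KP_TURN * tx;
  }

  private AimAtPoi() {}
}
```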
As for the Coral support, from Limelight's Chief Delphi post about this year's firmware:
Learning-Based Vision & Google Coral Support (We need your help) Google Coral is now supported by all Limelight models. Google Coral is a 4TOPs (Trillions-of-Operations / second) USB hardware accelerator that is purpose-built for inference on 8-bit neural networks.
Just like retroreflective tracking a few years ago, the barrier to entry for learning-based vision on FRC robots has been too high for the average team to even make an attempt. We have developed all of the infrastructure required to make learning-based vision as easy as retroreflective targets with Limelight.
We have a cloud GPU cluster, training scripts, a dataset aggregation tool, and a human labeling team ready to go. We are excited to bring easy-to-use, zero-code deep neural networks to the FRC community for the first time.
We currently support two types of models: Object Detection models, and Image classification models.
Object detection models will provide “class IDs” and bounding boxes (just like our retroreflective targets) for all detected objects. This is perfect for real-time game piece tracking. Please contribute to the first-ever Limelight object detection model by submitting images here: https://datasets.limelightvision.io/frc2023
Image classification models will ingest an image, and produce a single class label.
To learn more and to start training your own models for Limelight, check out Teachable Machine by Google: https://www.youtube.com/watch?v=T2qQGqZxkD0. Image classifiers can be used to classify internal robot state, the state of field features, and so much more.
Great stuff, thanks Adam! The Coral is only $59.99, which the team can cover (https://coral.ai/products/accelerator). It's really up to the students: if they want to try out the Coral, we can get one. If it's too much time to invest/look into and no one is interested, then we don't have to get one. @WesleyJ-128 Would you be interested? Next time I see Diego I can ask.
Using the Coral for gamepiece classification and acquisition may be easier than GRIP pipelines. I'm not particularly interested personally (at least right now), but it could definitely be a useful feature.
Configure a Limelight to detect whether there is a cone, cube, or no object in the collection zone. May involve acquiring a Coral. This data will be used for automatic gamepiece acquisition. Potentially also detect the orientation of cones.
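A sketch of how the subsystem might read that back out, assuming a Coral-backed detector/classifier pipeline. The "tv" and "tclass" keys are from the Limelight NetworkTables API, but whether "tclass" publishes an index or a label, and which index maps to cone vs. cube, depends on the trained model and should be verified against the docs:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class GamePieceDetector {
  public enum GamePiece { CONE, CUBE, NONE }

  private final NetworkTable table = NetworkTableInstance.getDefault().getTable("limelight");

  // Class-index mapping is a placeholder; it depends on the label order of the trained model.
  private static final int CONE_CLASS_ID = 0;
  private static final int CUBE_CLASS_ID = 1;

  /** Returns which gamepiece (if any) the neural pipeline currently sees in the collection zone. */
  public GamePiece detectedPiece() {
    if (table.getEntry("tv").getDouble(0) < 1.0) {
      return GamePiece.NONE;
    }
    int classId = (int) table.getEntry("tclass").getDouble(-1);
    if (classId == CONE_CLASS_ID) {
      return GamePiece.CONE;
    } else if (classId == CUBE_CLASS_ID) {
      return GamePiece.CUBE;
    }
    return GamePiece.NONE;
  }
}
```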