4201VitruvianBots / RapidReact2022

FRC Team 4201 - Codebase for FRC Season 2022

OAK camera code #40

Open mailmindlin opened 2 years ago

mailmindlin commented 2 years ago

Congrats on your performance in Houston!

I'm a mentor on Miracle Workerz (FRC 365), and we were considering using an OAK-D (or -LITE) on our robot next season. I was wondering if I could get access to the code for interfacing with the OAK camera?

jonathandao0 commented 2 years ago

Hi mailmindlin!

The OAK code we used can be found on this GitHub repository: https://github.com/jonathandao0/2022RapidReactDepthAI. This has everything required to run, including the trained ML model.

A lot of it can be run and tested on a local computer before putting it on a robot. You will need to make a few modifications: update the device IDs to match your own OAK devices (we did this because we had multiple OAK devices running off a single Pi, each dedicated to a particular function: intake and goal detection), and update the IPs for NetworkTables so it can properly send data to the robot.
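To illustrate the device-ID setup described above, here is a minimal sketch of mapping each OAK's serial (MXID) to its role. The MXIDs and the NetworkTables server IP are placeholders, not values from the actual repository; substitute your own.

```python
# Hypothetical sketch of the "one OAK per function" setup described above.
# The MXIDs and the roboRIO IP below are placeholders -- replace them with
# your own device serials (printed by the DepthAI tools) and your team's IP.

DEVICE_ROLES = {
    "PLACEHOLDER_MXID_A": "goal_detection",    # OAK mounted near the shooter
    "PLACEHOLDER_MXID_B": "intake_detection",  # OAK watching the intake
}

NT_SERVER_IP = "10.TE.AM.2"  # roboRIO address; fill in your team number

def role_for_device(mxid: str) -> str:
    """Return the pipeline role assigned to a connected OAK device."""
    try:
        return DEVICE_ROLES[mxid]
    except KeyError:
        raise KeyError(f"Unknown OAK device {mxid!r}; add it to DEVICE_ROLES")
```

With a table like this, the host script can enumerate connected devices and start the matching pipeline on each, instead of hard-coding one camera.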

To run it on a robot, we used a Raspberry Pi 4 running the WPILib Pi image. The Pi will need to be connected to the internet first in order to download the dependencies. Install them by SSHing into the Pi and running python3 -m pip install -r requirements.txt from the project directory. You may need to force an NTP time update on the Pi first before pip will pull the dependencies.
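The install steps above might look like the following session. The hostname and NTP invocation are assumptions (the WPILib Pi image defaults to wpilibpi.local), not commands taken from the repository:

```shell
# Hypothetical session; hostname and project path are placeholders.
ssh pi@wpilibpi.local

# Force a time sync first -- pip can reject TLS certificates if the Pi's
# clock is stale after sitting unpowered.
sudo ntpdate -u pool.ntp.org

# Install the project's Python dependencies.
cd 2022RapidReactDepthAI
python3 -m pip install -r requirements.txt
```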

To have the code automatically start up with the Pi, just replace the runCamera file from the WPILib Pi image with the one found in the repository. https://github.com/jonathandao0/2022RapidReactDepthAI/blob/main/wpilibpi-startup-scripts/goal-depth-detection-host/runCamera

mailmindlin commented 2 years ago

Thanks!

Just wondering, which models of OAK did you use for your robot? I'm mostly wondering if you used the fixed-focus or auto-focus variants because I read something about the AF cameras losing focus when vibrating.

jonathandao0 commented 2 years ago

For shooting/detecting the goal, you should probably use a fixed-focus version of the OAK if you plan on mounting it close to your flywheel/shooter. We also made a custom 3D printed mount that used these screws to isolate the camera from the flywheel. We tried a regular OAK-D model with auto-focus in the 2021 off-season for goal detection with it next to the flywheel, and found that even with the focus configured manually, the vibrations would shift the camera module on the OAK device to the point where it would no longer detect the goal.

For intaking/detecting game pieces, either model should work as long as you aren't putting it near anything that will expose it to a large amount of vibration.

mailmindlin commented 1 year ago

Hey, sorry to bother you more, but my team just got our OAK-D S2 and we're trying to replicate your work during the off-season, and we have a few more questions (if you don't mind).

jonathandao0 commented 1 year ago

Hi mailmindlin!

The link in depthai-frc would have linked you to our 2020/2021 dataset. I've fixed the link, which you can also find here. As for our 2022 dataset, you can find it here. The data was a mix of publicly available images released by WPILib, team 1577's set of images for cargo, publicly posted images from Week 0 events, images we generated from our own practice field, and a few images I captured of the official field at the Port Hueneme Week 1 event when I went there to volunteer.

The training parameters we used were a modified yolov3-tiny_obj.cfg from Darknet, adjusted for the number of classes we were labeling.
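For reference, the standard Darknet convention when adapting yolov3-tiny_obj.cfg to a custom class count is to set classes=N in each [yolo] section and filters=(N + 5) * 3 in the [convolutional] section directly above it. The exact edits in the team's cfg aren't shown here, but the arithmetic is:

```python
def yolo_filters(num_classes: int, num_anchors: int = 3) -> int:
    """Filters for the conv layer feeding a [yolo] head in Darknet:
    (classes + 4 box coords + 1 objectness score) * anchors per head."""
    return (num_classes + 5) * num_anchors
```

For example, a two-class model (cargo plus goal) needs filters=21, versus 255 for the stock 80-class COCO config.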

I plan on releasing a more detailed write-up of our experiences doing this in 2022, but the biggest takeaway we got was that detecting the goals in FRC games using ML is extremely difficult and time consuming, so you are probably better off using more traditional vision processing for this. With AprilTags being integrated into FRC going forward, it's even harder to justify the effort. Game objects should still be a valid use case for ML, and should be less difficult to train and test a model for.