scottpantall opened 6 months ago
I am following these instructions: https://docs.wpilib.org/en/stable/docs/software/vision-processing/wpilibpi/using-a-coprocessor-for-vision-processing.html. I have the Pi running with a camera attached; I just need to get the Python code running.
I had the Python code running on my computer with Python v3.10 (at least I'm pretty sure it was running), but when I upgraded my computer to Python v3.12, pip installs stopped working for reasons I'm still trying to figure out.
The instructions I'm following connect the Raspberry Pi to the RoboRIO via an Ethernet cable, not USB, which concerns me. If anyone has resources on connecting the Pi to the Rio via USB, I'm all ears.
My issue was that the inference module (https://pypi.org/project/inference/) doesn't work with Python v3.12, so I uninstalled it and switched to v3.11.8, which seems to be working.
This confuses me, since we have a poseEstimate.cpython-312.pyc file, which seems to tell me we used v3.12 to build that file. But if this works, it works.
Good news! I can get it to attempt to run on the Pi.
Bad news! The Pi currently has no internet access, so it can't just go get the inference module. I'll have to play more tomorrow.
This might help, but it might also be horrible since inference has a LOT of dependencies: https://skylerh.com/how-to-install-python-modules-without-internet-access/
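The general approach from that link can be sketched roughly as below. This is a hedged sketch, not tested against the inference package specifically: `pip download` on a laptop grabs wheels for the *laptop's* platform by default, so for the Pi's ARM architecture you may need the `--platform`/`--only-binary` flags, or run the download step on another internet-connected Pi. The `./wheelhouse` directory name is just a placeholder.

```shell
# On a machine WITH internet access (ideally same Python version and
# architecture as the Pi), download the package plus all dependencies:
pip download inference -d ./wheelhouse

# Copy ./wheelhouse to the Pi (USB stick, scp, ...), then on the Pi
# install entirely from the local directory, never touching the network:
pip install --no-index --find-links ./wheelhouse inference
```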
Definitely use Python 3.11 for running on your own machine, because that is what is auto-installed on the Pi.
Alright. 2 things to figure out first...
I can write to NetworkTables in desktop simulation mode! I used this Python package: https://pypi.org/project/pynetworktables/. It literally just writes the same values that it writes to the console.
We will need to read the robot positions from NetworkTables (code that has been written by @twisterjafla in a different branch of robot-comp-code-2024) in order to use the toRobotPosit to communicate as desired.
We should also be able to get the Raspberry Pi to run our note detection on startup by following these directions: https://learn.sparkfun.com/tutorials/how-to-run-a-raspberry-pi-program-on-startup#method-1-rclocal
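Following that SparkFun guide's rc.local method, the Pi-side change would look roughly like the fragment below. The script path is a placeholder for wherever our detection code actually lives; the trailing `&` matters so the boot sequence doesn't block waiting on our script.

```shell
# Hypothetical line added to /etc/rc.local, ABOVE the final "exit 0":
# (path is a placeholder for our actual detection script on the Pi)
python3 /home/pi/note-detection/main.py &
```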
The robot positions ("robotPositX", "robotPositY", "RobotRotation") are written to the SmartDashboard from the semiAutoManager class (lines 62-64).
We are able to read the robot positions from network tables! W00t!
I don't think we need to worry about checking USB ports for cameras, but I'll make sure of that tomorrow. I am also unsure how @oof15642 was dealing with multiple notes.
I'll put up a PR for this branch so we can discuss it in the code.
I got the note detection to only care about 1 note! W00t!
We want note detection to work! What do we need to do?
From @oof15642: Here's the code I used from AI for the index checking: