frc538 / 2020-infinite-recharge

Robot code for FRC Team 538.

Read the docs. #2

Closed drewwhis closed 4 years ago

drewwhis commented 4 years ago

I think the best course of action is to use GRIP, but I could definitely be convinced otherwise. I also think our ability to do anything with vision is limited by how much of a functional robot and accurate field we have (but again, I'm open to other interpretations).

Vision docs: https://docs.wpilib.org/en/latest/docs/software/vision-processing/index.html

GRIP docs: https://docs.wpilib.org/en/latest/docs/software/vision-processing/grip/index.html

WPILib docs: https://docs.wpilib.org

HeathHudson commented 4 years ago

Whether we can do vision, given the two criteria you listed, will probably hinge more on the quality of the field we have. I don't think the quality of the robot will be an issue (fingers crossed), but with the cameras we currently have, I think the quality of our view of the field could be a problem.

drewwhis commented 4 years ago

@HeathHudson and I messaged a little bit about this, but for transparency:

Our cameras should be fine for vision processing (as long as we can supply a light source). The image only looks low-quality to a driver because we intentionally reduce its quality along the path from the camera, to the Pi, to the roboRIO, to the Driver Station so we can stream video to the Operator Console for driver vision. If we're instead processing the image on the Pi and only sending the results to NetworkTables (and not streaming that camera to the driver), we don't have to reduce the quality at all, so image quality won't be an issue there.
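For context on what "results" means here: a GRIP pipeline on the Pi typically finds target contours, and the Pi-side code then reduces them to a few numbers (like a yaw angle to the target) before publishing to NetworkTables. A minimal sketch of the usual pixel-to-angle math, assuming a pinhole-camera model — the image width and field-of-view values below are illustrative, not measurements of our actual camera:

```python
import math

def pixel_to_yaw(target_x_px, image_width_px, horizontal_fov_deg):
    """Convert a target's x pixel coordinate to a yaw angle in degrees.

    Positive yaw means the target is to the right of image center.
    The focal length in pixels is derived from the horizontal FOV.
    """
    center_x = image_width_px / 2.0
    focal_len_px = center_x / math.tan(math.radians(horizontal_fov_deg / 2.0))
    return math.degrees(math.atan((target_x_px - center_x) / focal_len_px))

# Example: a 320 px wide image from a camera with a 60-degree horizontal FOV.
# A target at the image center gives 0 degrees of yaw.
print(pixel_to_yaw(160, 320, 60.0))  # -> 0.0
```

A value like this is cheap to send over NetworkTables every frame, which is why the processed-on-Pi approach sidesteps the streaming-quality problem entirely.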

If we need to improve driver vision, that's a different story (no pun intended).

We can talk about this on Monday if we need more clarity.