BotDogs4645 / SY2023-CORE-A

Robot programming for CORE A
Other

Autobalance using pitch/roll - Autonomous Specific #13

Open davidmuchow opened 1 year ago

davidmuchow commented 1 year ago

First of all, because the robot's orientation varies, which axis reads as pitch vs. roll is ambiguous. That does NOT mean the two variables are interchangeable. We can use the yaw angle to determine whether pitch or roll is what we feed into the PIDCommand.

We are going to use a ProfiledPIDController for the tiny, tiny adjustments. It will control either X or Y, NOT both; we cannot be moving lengthwise on the charging station.

We also need to discuss at what point people press the button in endgame specifically.

Auton is a whole other beast. I think PID should be activated right as the robot enters the platform. If it activates before the robot is on the platform, it will not begin balancing, because the pitch/roll and their rates are 0 and it will think it's already balanced. Here is what I'm thinking, command-structure-wise:

davidmuchow commented 1 year ago

The slow move --> autobalance PID sequence is something that can also be incorporated into teleop.
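The slow move --> autobalance hand-off could be sketched roughly like this. This is a minimal plain-Java stand-in, not the actual command structure (a real version would wrap it in a PIDCommand with a ProfiledPIDController); the threshold, gain, and class names are all assumptions:

```java
// Hypothetical sketch of the slow-move -> autobalance hand-off:
// drive slowly until the charge station tilts, then switch to a
// proportional balance controller (stand-in for a ProfiledPIDController).
public class AutoBalanceSketch {
    static final double TILT_ENGAGE_DEG = 8.0; // assumed tip angle when climbing
    static final double K_P = 0.02;            // assumed proportional gain
    static final double MAX_SPEED = 0.3;       // assumed max drive output

    // Returns true once the measured tilt says we are on the ramp,
    // so the slow approach can hand off to the balance phase.
    public static boolean onRamp(double tiltDeg) {
        return Math.abs(tiltDeg) >= TILT_ENGAGE_DEG;
    }

    // Simple P controller on tilt; drives against the tilt, clamped.
    public static double balanceOutput(double tiltDeg) {
        double out = -K_P * tiltDeg;
        return Math.max(-MAX_SPEED, Math.min(MAX_SPEED, out));
    }
}
```

Until `onRamp()` fires, the approach command outputs a fixed slow speed; once it fires, `balanceOutput()` takes over. The same hand-off works for a driver-triggered teleop version.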

davidmuchow commented 1 year ago

Let me know what you guys think @BotDogs4645/core-a

davidmuchow commented 1 year ago

315 - 45: PITCH - = TOWARD OPPOSITE ENTRANCE POINT
45 - 135: ROLL + = TOWARD OPPOSITE ENTRANCE POINT
135 - 225: PITCH + = TOWARD OPPOSITE ENTRANCE POINT
225 - 315: ROLL - = TOWARD OPPOSITE ENTRANCE POINT

davidmuchow commented 1 year ago

wrapped gyro heading determines which of pitch/roll we use and its +/- sign
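The quadrant table above could be folded into a small helper like this sketch. It is hypothetical (class name, method names, and boundary handling are my assumptions), but it shows how the wrapped heading picks both the axis and the sign:

```java
// Hypothetical helper implementing the 90-degree quadrant table:
// given the wrapped gyro heading, pick whether pitch or roll points
// toward the opposite entrance point, and with which sign.
public class TiltAxisSelector {
    public enum Axis { PITCH, ROLL }

    // Normalize any heading into [0, 360).
    static double wrap(double yawDeg) {
        return ((yawDeg % 360) + 360) % 360;
    }

    // 315-45 and 135-225 -> PITCH; 45-135 and 225-315 -> ROLL.
    public static Axis axisFor(double yawDeg) {
        double yaw = wrap(yawDeg);
        if (yaw >= 315 || yaw < 45 || (yaw >= 135 && yaw < 225)) {
            return Axis.PITCH;
        }
        return Axis.ROLL;
    }

    // +1 means the positive axis reading points toward the opposite
    // entrance point; -1 means the negative reading does.
    public static int signFor(double yawDeg) {
        double yaw = wrap(yawDeg);
        if (yaw >= 315 || yaw < 45) return -1; // 315-45: PITCH -
        if (yaw < 135) return +1;              // 45-135: ROLL +
        if (yaw < 225) return +1;              // 135-225: PITCH +
        return -1;                             // 225-315: ROLL -
    }
}
```

The selected axis and sign would then decide which gyro channel feeds the balance PID and whether its setpoint error is negated.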

davidmuchow commented 1 year ago
  • we could find exactly where the item is pretty well with either a color sensor or another camera on the claw; honestly, just doing OpenCV would probably work better. Maybe we get a generic camera flash to help isolate edges?

    • we can also check amps on whatever gripper we use to see if we actually have something?
  • if the robot is not a square, we should make the longer side point into the ramp to allow more space for the other robots; if not, then yeah, everything looks good

I do think that OpenCV might be an option, but we have to consider which variables we would move via OpenCV. We can calculate the angle change between the camera's front-facing Pose and the ball's center position in terms of pixels. There is some math there that might already exist in PhotonLib that we can use to estimate the ball's pose from the camera.
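For reference, the pixel-to-angle math mentioned above can be approximated without PhotonLib. This is a hedged sketch: the field-of-view and image-width parameters are assumed camera specs, and PhotonLib's own utilities would be the real path:

```java
// Hypothetical back-of-envelope pixel-to-angle conversion: estimate
// the yaw offset of a target from the camera centerline given its
// pixel x-coordinate, using a pinhole camera model.
public class PixelAngle {
    // imageWidthPx and horizontalFovDeg are assumed camera parameters.
    public static double yawToTargetDeg(double targetPx,
                                        double imageWidthPx,
                                        double horizontalFovDeg) {
        // Focal length in pixels, derived from the horizontal FOV.
        double focalPx = (imageWidthPx / 2.0)
                / Math.tan(Math.toRadians(horizontalFovDeg / 2.0));
        // Horizontal offset of the target from the image center.
        double offsetPx = targetPx - imageWidthPx / 2.0;
        return Math.toDegrees(Math.atan2(offsetPx, focalPx));
    }
}
```

That yaw estimate, plus a known target height, is the usual trick for estimating distance and then a rough pose of the ball relative to the camera.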

Also, operating that alongside the driver camera, the LL, and the AprilTag camera is four cameras. Do we really want/need that much complexity?

camden-git commented 1 year ago

Your idea for positioning the arm is good. I was kind of thinking we move it down until only 4 corners are visible and then move it forward, but if we aren't viewing it perfectly straight on, that wouldn't work. As for the camera count, I'm not sure there's anything better we can do without cutting features.

davidmuchow commented 1 year ago

[image]

A 2-degree-of-freedom PID, which is required for this problem.