What sort of input are you thinking of using to tell the robot which stack to go to? Something like switching through the stacks with the arrow keys, or just having one button per stack?
https://github.com/Brighton-FTC/2024/commit/4c0729144383eb468e370a6554b9233c62c62c6e
Ok, made a (very sloppy and bad) prototype for aligning to the pixels.
Basically, given a button press, it either goes to the left april tag and strafes right until the button is pressed again, or it goes to the right april tag and strafes left until that button is pressed again.
The point of this was mainly to learn about april tag detection, not for any practical use, and most of the code was taken from here.
I will be working on a better implementation.
My list of things to do:
According to the field setup guide (page 49), the pixel stacks are 11" apart, so I'll try to get the robot to drive to the correct pixel stack from that. I guess we'll have to use imperial units then...
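To make that concrete, here's a minimal sketch of turning a stack index into a strafe distance, assuming the 11" spacing from the field setup guide. `LEFT_TAG_STACK_INDEX` and `RIGHT_TAG_STACK_INDEX` are placeholders until we know which stack each april tag actually lines up with:

```java
public final class StackOffsets {
    public static final double STACK_SPACING_INCHES = 11.0;

    // Placeholders: which stack index sits directly in front of each tag
    // is a guess until we measure it on the field.
    public static final int LEFT_TAG_STACK_INDEX = 1;
    public static final int RIGHT_TAG_STACK_INDEX = 4;

    private StackOffsets() {}

    /**
     * Lateral distance (inches) from the tag the robot is facing to the
     * selected stack. Sign convention (positive = strafe right) depends on
     * how the stacks end up being numbered, so this may need flipping.
     */
    public static double strafeInchesToStack(int selectedStack, boolean usingLeftTag) {
        int referenceStack = usingLeftTag ? LEFT_TAG_STACK_INDEX : RIGHT_TAG_STACK_INDEX;
        return (selectedStack - referenceStack) * STACK_SPACING_INCHES;
    }
}
```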
Ok, made a few commits:
Made the robot (hopefully) drive straight to the desired pixel stack. Also made the class a linear opmode rather than an iterative opmode.
Basically, there is a "selected" pixel stack (0 to 5 inclusive), which can be changed with the dpad left and right buttons.
("Selected" pixel stack index shown in telemetry.)
Then, when X is pressed (I don't know what that button is called on the PlayStation controller), if an april tag with metadata is detected, the robot is meant to drive to the selected pixel stack (emphasis on the meant to).
However, the driver needs to point the robot towards the left april tag for the left three pixel stacks, and the right april tag for the right three; otherwise the robot will inevitably crash into a wall. I'll probably look into ways of getting it to the correct pixel stack even when it's oriented towards the wrong april tag.
At the moment, I'm not using either the distance sensor or the gyro, but I might try to implement those for greater accuracy.
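For reference, the dpad selection logic sketched as a standalone linear opmode. The class name is illustrative, not the actual class in the commit; the edge detection is there so one press moves exactly one stack:

```java
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;

@TeleOp(name = "Stack Selection Sketch")
public class StackSelectionSketch extends LinearOpMode {
    @Override
    public void runOpMode() {
        int selectedStack = 0;
        boolean leftWasPressed = false;
        boolean rightWasPressed = false;

        waitForStart();
        while (opModeIsActive()) {
            // React on the press edge, not while the button is held down.
            if (gamepad1.dpad_left && !leftWasPressed) {
                selectedStack = Math.max(0, selectedStack - 1);
            }
            if (gamepad1.dpad_right && !rightWasPressed) {
                selectedStack = Math.min(5, selectedStack + 1);
            }
            leftWasPressed = gamepad1.dpad_left;
            rightWasPressed = gamepad1.dpad_right;

            telemetry.addData("Selected stack", selectedStack);
            telemetry.update();
        }
    }
}
```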
List of things to do:
Code can be found here btw.
Here it says that the yaw given by the AprilTagPoseFtc class is measured in degrees, while the offsets I was applying in my code were in inches. I suppose we could fine-tune the offsets for driving to the pixel stacks once we can test, but that would be time-consuming and inelegant, so I'll try to derive those offsets some other way.
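One possible way to avoid hand-tuned offsets, assuming the pose fields behave as documented: AprilTagPoseFtc also exposes x (sideways) and y (forward) translations in distance units (inches by default), not just the yaw angle, so the drive distances could come straight from the pose plus the known 11" spacing. A sketch, reusing the hypothetical `strafeInchesToStack()` helper from the earlier sketch:

```java
import org.firstinspires.ftc.vision.apriltag.AprilTagDetection;

public final class DriveOffsets {
    private DriveOffsets() {}

    /** Inches to strafe: tag's sideways offset plus stack-to-stack spacing. */
    public static double strafeInches(AprilTagDetection tag, int selectedStack,
                                      boolean usingLeftTag) {
        return tag.ftcPose.x
                + StackOffsets.strafeInchesToStack(selectedStack, usingLeftTag);
    }

    /** Inches to drive forward before handing over to the distance sensor. */
    public static double forwardInches(AprilTagDetection tag, double stopShortInches) {
        return tag.ftcPose.y - stopShortInches;
    }
}
```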
https://github.com/Brighton-FTC/2024/commit/520dfd22edab566be3bd581bba07c86172bf0d0c
I decided to use the distance sensor and the built-in gyro (at least, I think there's a built-in gyro) to (a) rotate the robot so it's facing the correct stack of pixels, and (b) move forwards until the distance sensor reading is low enough.
Also, I tried using closed-loop control (as shown here), but I don't know how well it will work.
(Right now I'm just setting the actuators to full power in the direction they need to go, but I might look into making the power proportional to the error.)
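A minimal sketch of what that proportional version might look like; the gain and the power cap are made-up values to tune on the real robot:

```java
public final class PController {
    private final double kP;
    private final double maxPower;

    public PController(double kP, double maxPower) {
        this.kP = kP;
        this.maxPower = maxPower;
    }

    /** Returns a motor power in [-maxPower, maxPower] proportional to the error. */
    public double calculate(double target, double current) {
        double power = kP * (target - current);
        return Math.max(-maxPower, Math.min(maxPower, power));
    }
}
```

Usage would be something like `new PController(0.02, 0.5).calculate(targetHeadingDegrees, currentHeadingDegrees)` each loop, instead of hardcoding full power.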
Now, my main priority (apart from making the code slightly less terrible) is to write the code to actually get the robot to pick up a pixel from the desired stack.
As for how that's going to happen, I have absolutely no idea; I don't even know what the arm/linear slide/grabber do.
For detecting where the stack actually is, I was thinking I could turn the robot a small amount in either direction and take the heading where the distance reading was lowest as the position of the pixel stack, but that would be slow and inefficient, so I'll see if there's a better way.
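For the record, here's roughly what that sweep could look like, assuming the control hub IMU and a hypothetical `setTurnPower()` drivetrain helper; the turn power is a guess, and it only sweeps one way for simplicity (doing both directions would just mirror this):

```java
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.hardware.DistanceSensor;
import com.qualcomm.robotcore.hardware.IMU;
import org.firstinspires.ftc.robotcore.external.navigation.AngleUnit;
import org.firstinspires.ftc.robotcore.external.navigation.DistanceUnit;

public abstract class StackSweep extends LinearOpMode {
    // Set from hardwareMap in runOpMode().
    protected DistanceSensor distanceSensor;
    protected IMU imu;

    /** Hypothetical drivetrain helper; assumes positive power increases yaw. */
    protected abstract void setTurnPower(double power);

    /**
     * Turns through roughly sweepDegrees and returns the heading (degrees)
     * where the distance reading was lowest. Fine for small sweeps; would
     * need wraparound handling near ±180°.
     */
    protected double findStackHeading(double sweepDegrees) {
        double startHeading = imu.getRobotYawPitchRollAngles().getYaw(AngleUnit.DEGREES);
        double bestHeading = startHeading;
        double bestDistance = Double.MAX_VALUE;

        setTurnPower(0.15); // turn slowly so samples line up with headings
        while (opModeIsActive()) {
            double heading = imu.getRobotYawPitchRollAngles().getYaw(AngleUnit.DEGREES);
            if (heading - startHeading > sweepDegrees) break;
            double distance = distanceSensor.getDistance(DistanceUnit.INCH);
            if (distance < bestDistance) {
                bestDistance = distance;
                bestHeading = heading;
            }
        }
        setTurnPower(0);
        return bestHeading;
    }
}
```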
https://github.com/Brighton-FTC/2024/commit/6dde296494cd4fd4619c4707977ba5d2a3de5aa4
Made it (hopefully) go all the way to touching the required pixel stack.
Right now I'm using a distance sensor, a gyroscope, and a touch sensor.
The EN says there will be a distance sensor and a touch sensor, and I think there's a gyroscope in the control hub.
My main priority now is writing the code to pick up a pixel.
Things to do:
https://github.com/Brighton-FTC/2024/commit/aec22a9e93134382c1756dc73b7f04b8d55ce461
Made code to (hopefully) pick up a pixel.
Also, made the class callable from an opmode, rather than being an opmode itself.
The thing is, I don't know anything about the arm/linear slide/grabber, so I'm sort of guessing what they do.
I suppose if the engineers finish the robot, I'll be able to look at it and change the code.
Right now the touch sensor is meant to sit on the grabber (or somewhere like that) to tell it when it can close.
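A sketch of that idea, with placeholder servo positions and names, since I'm still guessing at what the real grabber looks like:

```java
import com.qualcomm.robotcore.hardware.Servo;
import com.qualcomm.robotcore.hardware.TouchSensor;

public class Grabber {
    private static final double OPEN_POSITION = 0.2;   // guess, tune on robot
    private static final double CLOSED_POSITION = 0.8; // guess, tune on robot

    private final Servo grabberServo;
    private final TouchSensor grabberTouch;

    public Grabber(Servo grabberServo, TouchSensor grabberTouch) {
        this.grabberServo = grabberServo;
        this.grabberTouch = grabberTouch;
    }

    /** Call every loop while lowering the grabber; closes once the sensor trips. */
    public boolean closeIfTouching() {
        if (grabberTouch.isPressed()) {
            grabberServo.setPosition(CLOSED_POSITION);
            return true; // pixel (hopefully) grabbed
        }
        return false;
    }

    public void open() {
        grabberServo.setPosition(OPEN_POSITION);
    }
}
```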
I'll try to tidy up the code, add more javadoc, incorporate PSButtons, and do more research.
I hope the engineers don't decide to put the distance sensor "on the side"...
EDIT:
I realized I did some really stupid stuff in my code, like implementing half a switch statement and then doing the rest with if statements, so here are follow-up commits that (hopefully) fixed those mistakes:
https://github.com/Brighton-FTC/2024/commit/82c8bdd0cab9cb8330a154a55e5a602d2ee4a4c1
https://github.com/Brighton-FTC/2024/commit/13194cc9d4f12b19fbd1a79ec7fe31466b041804
https://github.com/Brighton-FTC/2024/commit/f4296a51097a1afe1cb6fc6a86910326a1828a86
Found out in game manual 2 (page 57) that the april tags for each side are always the same (I don't know why I wasn't expecting that), so I made it so the robot will (hopefully) not crash into a wall when it's angled towards the wrong april tag, and will instead rotate towards the correct one.
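A sketch of the tag check; `LEFT_WALL_TAG_ID` and `RIGHT_WALL_TAG_ID` are placeholders for whatever IDs game manual 2 actually assigns to our side of the field:

```java
import org.firstinspires.ftc.vision.apriltag.AprilTagDetection;

public final class WallTags {
    // Placeholders: fill in from game manual 2 (page 57) for our alliance.
    public static final int LEFT_WALL_TAG_ID = 0;
    public static final int RIGHT_WALL_TAG_ID = 0;

    private WallTags() {}

    /**
     * True if the detected tag is the one we should be driving towards,
     * i.e. the left tag for stacks 0-2 and the right tag for stacks 3-5.
     * If false, rotate towards the other tag instead of driving.
     */
    public static boolean isCorrectTag(AprilTagDetection tag, int selectedStack) {
        int wantedId = (selectedStack <= 2) ? LEFT_WALL_TAG_ID : RIGHT_WALL_TAG_ID;
        return tag.id == wantedId;
    }
}
```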
I also added a bit of javadoc, and changed some of the distances in the code.
I still have hardly any idea how the arm/linear slide/grabber work, so I'll try to get more information from the engineers and implement those.
(Actually I might just copy Steve's and Lawrence's code to do that.)
I'll also see if the engineers can put a touch sensor on the grabber, so I know when to stop putting the grabber down, and grab the top pixel.
(See my beautiful artistic drawing below of how I imagine the grabber to be like.)
It appears that, for accurate april tag pose estimation, the webcam may need to be calibrated (as stated here).
There are instructions on how to do this here, but I don't know if we'll actually have to: the webcam the engineers chose may have built-in support, or the error may be small enough that it doesn't matter.
Either way, I thought I'd put it here before I forget about it.
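If we do end up calibrating, the SDK's AprilTagProcessor.Builder accepts lens intrinsics directly. The fx/fy/cx/cy numbers below are placeholders for whatever the calibration instructions produce for our actual webcam, and "Webcam 1" is just the usual configuration name:

```java
import com.qualcomm.robotcore.hardware.HardwareMap;
import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.vision.VisionPortal;
import org.firstinspires.ftc.vision.apriltag.AprilTagProcessor;

public final class CalibratedAprilTags {
    private CalibratedAprilTags() {}

    public static AprilTagProcessor buildProcessor() {
        return new AprilTagProcessor.Builder()
                // Placeholder intrinsics: focal lengths (fx, fy) and
                // principal point (cx, cy), in pixels, from calibration.
                .setLensIntrinsics(822.317, 822.317, 319.495, 242.502)
                .build();
    }

    public static VisionPortal buildPortal(HardwareMap hardwareMap,
                                           AprilTagProcessor processor) {
        return VisionPortal.easyCreateWithDefaults(
                hardwareMap.get(WebcamName.class, "Webcam 1"), processor);
    }
}
```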
Anyway, if the april tag thing fails, we could always just use Mo's idea about having three distance sensors; one at the front and one on either side.
https://github.com/Brighton-FTC/2024/commit/eada96d46ea6e0f9180cd6356076f122b7ca79bd
Added arm code and PID from @12sliu's code. I also did some PID things for the linear slide, but once the linear slide code is done, I'll probably just copy that.
The PID gains don't really exist yet, but once those values are found for the arm/linear slide, I'll plug them into the PID calculator.
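For when the gains do exist, a minimal sketch of the PID calculator that the arm and linear slide could share; the gains are placeholders and dt comes from the caller so this stays hardware-agnostic:

```java
public class PidController {
    private final double kP, kI, kD;
    private double integral = 0;
    private double lastError = 0;

    public PidController(double kP, double kI, double kD) {
        this.kP = kP;
        this.kI = kI;
        this.kD = kD;
    }

    /** target/current in the same units (e.g. encoder ticks); dt in seconds. */
    public double calculate(double target, double current, double dt) {
        double error = target - current;
        integral += error * dt;
        double derivative = (error - lastError) / dt;
        lastError = error;
        return kP * error + kI * integral + kD * derivative;
    }

    public void reset() {
        integral = 0;
        lastError = 0;
    }
}
```

One instance per actuator, calling `calculate()` each loop with dt taken from an ElapsedTime.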
https://github.com/Brighton-FTC/2024/commit/00947b8ad52eb56a7913570c4f5b71ec785f0a16
Made the class an OpMode again, since @12sliu said it was fine to have the class in that format.
Mo wants the robot to go backwards after the pixel is grabbed, to avoid knocking the pixel stack over.
https://github.com/Brighton-FTC/2024/commit/dce9dca477003624da7639b4c46beda5beeb3c58
Ok I think I did that.
Need to convert it into a component class, make a tester class for the component, and have the class itself use the component classes.
Ok. Ideally I'd like to have the component classes for the arm, grabber, and linear slide for that. Do you know when the linear slide component class will be completed? (The grabber and arm component classes have been written but not reviewed.)
Oh, I need to review that; it'll go on my todo list. Please still proceed with using the arm and grabber components.
Linear slide component class will be done by end of Thu latest.
https://github.com/Brighton-FTC/2024/commit/a903cb5b883fbf3168cd0db07b9f34216508036e
Made it into a component class and added functionality tester.
Does this still need modifications or is it ready to test?
Will be rewritten using Road Runner (RR).
Link to the Google Doc explanation: https://docs.google.com/document/d/1cTIxikxVT6zcDwxJ5Thq7D4D-vLHuLN3-hTvNU1GLQo/edit#heading=h.diy47wtam215
Basically, the robot detects an april tag to work out where it is, then strafes (since this is a mecanum drive) to align with the selected stack of pixels. This assumes the robot is already facing the stacks.
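A sketch of that strafe-and-align step, reusing the hypothetical `DriveOffsets` helper from earlier in this thread and a made-up `setStrafePower()` drive method; the tolerance and gain are guesses to tune:

```java
import org.firstinspires.ftc.vision.apriltag.AprilTagDetection;

public abstract class StrafeAlign {
    private static final double TOLERANCE_INCHES = 0.5; // guess
    private static final double STRAFE_GAIN = 0.05;     // guess

    /** Hypothetical mecanum helper; positive = strafe right. */
    protected abstract void setStrafePower(double power);

    /** Returns true once aligned; call each loop with a fresh detection. */
    public boolean alignToStack(AprilTagDetection tag, int selectedStack,
                                boolean usingLeftTag) {
        // Sideways error = tag offset plus the 11"-per-stack spacing.
        double errorInches = DriveOffsets.strafeInches(tag, selectedStack, usingLeftTag);
        if (Math.abs(errorInches) < TOLERANCE_INCHES) {
            setStrafePower(0);
            return true;
        }
        // Proportional strafe power, capped at half power.
        setStrafePower(Math.max(-0.5, Math.min(0.5, STRAFE_GAIN * errorInches)));
        return false;
    }
}
```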