I can take a shot at one. It should be very similar, if not the same as the Spark one, with just everything replaced with SparkFlex.
As for your 45-degree angle, I'm not sure if you are using CANcoders. You should tune them the same way as the Falcons, using Phoenix Tuner X. Did you make sure that you changed the parameter in your gyro to false?
The Spark Max one is not configured for CANcoders, and every time I've tried to convert it to them I've had issues. What parameter do I need to set to false?
I'll make a branch for you. It might be later tonight, though.
Thank you!
@AydenLo17 Here you go. I can't test it since I don't have that setup, but it's my best guess. https://github.com/Hemlock5712/AdvantageKitSwerveTemplate/tree/SparkFlex
So, changes you will have to make: `GyroIOPigeon2(false)` is also false.

Thank you!
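For context on where that flag lives, here is a minimal sketch of the RobotContainer wiring, assuming the template's class names (`ModuleIOSparkFlex` is from the SparkFlex branch linked above, and the module indices follow the template's convention):

```java
// Sketch only; constructor names assume the AdvantageKitSwerveTemplate SparkFlex branch.
drive =
    new Drive(
        new GyroIOPigeon2(false), // the gyro parameter set to false
        new ModuleIOSparkFlex(0),
        new ModuleIOSparkFlex(1),
        new ModuleIOSparkFlex(2),
        new ModuleIOSparkFlex(3));
```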
Did it work?
It does not work; the offsets are acting all weird. I calibrated it using the numbers from AdvantageScope and Phoenix Tuner, and neither worked.
Did you reset to 0 before pulling the numbers from Phoenix Tuner?
No... lemme try that. What would I do without you 😄
So set your encoder to `Rotation2d.fromRotations(0)`, then go from there.
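To spell that out, here is a sketch of the flow, assuming the offsets live in DriveConstants as in the template (the constant name is hypothetical):

```java
// Step 1: temporarily zero every module offset in DriveConstants.
public static final Rotation2d FRONT_LEFT_OFFSET = Rotation2d.fromRotations(0);
// Step 2: with offsets zeroed, point all wheels forward (bevel gears the same way)
// and read each module's raw absolute angle in AdvantageScope or Phoenix Tuner.
// Step 3: paste those raw readings back in as the real offsets, e.g.
// Rotation2d.fromRotations(0.273) for a module that read 0.273 rotations.
```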
Everything's good! How do I switch it to 3 PhotonVision cameras, but 2 Raspberry Pis?
It would seem that PhotonVision isn't updating the pose estimate. In AdvantageScope, there is no AprilTagVision under RealOutputs.
It was because my camera was in 2D mode, not 3D. For calibrating the cameras, is robotToCamera relative to a center-of-robot origin? Like, would 0, 0, 0 be the middle of the robot?
Yes, that is correct: the middle of the robot, on the floor. Be careful, some of the units are weird.
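For reference, the usual WPILib convention (a sketch with hypothetical numbers): translations are in meters, with X forward, Y left, and Z up, measured from the robot's center on the floor; rotations are in radians, ordered roll, pitch, yaw.

```java
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.geometry.Transform3d;
import edu.wpi.first.math.geometry.Translation3d;

// Hypothetical camera: 25 cm forward, 10 cm left, 30 cm up, pitched 15 degrees.
Transform3d robotToCamera =
    new Transform3d(
        new Translation3d(0.25, 0.10, 0.30), // meters: X forward, Y left, Z up
        new Rotation3d(0.0, Math.toRadians(15.0), 0.0)); // radians: roll, pitch, yaw
```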
The robot's pose keeps flying away off the field, and will randomly come back to about the right area, then go away again. Any ideas on why? I set the offset.
Hmmm. Sounds similar to the issues we had last year with PhotonVision (which is why we are planning on switching). You may want to reach out to them for help with calibration. Is it affecting your pose estimation a lot? It shouldn't affect it a ton with the logic that we are using.
Yeah, it makes the robot just disappear off the field. When I press the auto pose reset button, it kind of makes its way back over toward the vision pose estimate (the most current accurate robot pose), but then it just disappears and goes far away again.
Yeah sounds like a really bad calibration issue.
I thought I calibrated it properly; it wouldn't let me go into 3D mode until I calibrated the camera. In PhotonVision you can view a 3D scene similar to how you can on the Limelight, and it was pretty accurate: all of the detected AprilTags and their poses in meters were fairly accurate. Even when I wasn't using the camera, the pose estimation still freaked out and went far away.
Okay, did you set your offsets in PhotonVision or in code?
In code; I do not believe you can set them in PhotonVision.
```java
// Front camera: 0.25 m forward, 0.3 m left, 0.2 m up from the robot's center,
// pitched 20 degrees. Rotation3d takes (roll, pitch, yaw) in radians.
private static final Transform3d robotToCamera1 =
    new Transform3d(
        new Translation3d(0.25, 0.3, 0.2),
        new Rotation3d(0, Math.toRadians(20), 0));

aprilTagVision =
    new AprilTagVision(new AprilTagVisionIOPhotonVision("FrontCam", robotToCamera1));
```
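On the earlier three-camera question, a hedged sketch, assuming the template's AprilTagVision constructor takes multiple IO implementations (the extra camera names and transforms are hypothetical): which Raspberry Pi hosts which camera shouldn't matter here, only the camera names configured in PhotonVision.

```java
// One IO per physical camera, each with its own robot-to-camera transform.
aprilTagVision =
    new AprilTagVision(
        new AprilTagVisionIOPhotonVision("FrontCam", robotToCamera1),
        new AprilTagVisionIOPhotonVision("LeftCam", robotToCamera2),
        new AprilTagVisionIOPhotonVision("RightCam", robotToCamera3));
```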
I'm wondering if I got my offset directions wrong. In X, Y, Z, which directions would those be? Would front-left, for example, be 1, 1, 0?
I can't remember if they do everything camera-to-robot or robot-to-camera, though. It might be -1, -1, 0. I thought last year it was camera-to-robot. I don't have my computer on me; I'll look tomorrow.
If you haven't read this, it's a good short read. It still doesn't answer our question, though. https://docs.photonvision.org/en/latest/docs/apriltag-pipelines/multitag.html#enabling-multitag
Yeah, I did read it, but I didn't try turning that on. Let me try that, and if you could figure out the transformation stuff, it would be cool to add a comment to the code. Just let me know, thank you!
Looks like 1,1,0 is front left.
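And if the convention turns out to be the other way around, WPILib can flip a transform directly; a minimal sketch:

```java
// If an offset was measured camera-to-robot by mistake, inverting it gives
// the robot-to-camera transform.
Transform3d robotToCamera = cameraToRobot.inverse();
```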
Tomorrow I would throw a negative on the 20 if you haven't tried that.
I don't think it's a PhotonVision thing. I put the Limelights back on it and set them up properly, but the pose estimate kept flying off the screen.
Could you send me an image of your setup (where the cameras are mounted on the robot, what the front of your robot is, and the tags you are trying to detect)? My gut is telling me that you are doing multi-tag tracking and potentially have them flipped.
Maybe try just one tag.
Yes, I was trying to detect the two on the speaker, red alliance.
Do you have tag 4 on the left side of tag 3?
We have the AprilTags in the correct positions. In the Limelight's 3D robot-in-field pose view, its location is fine.
Ignore the janky mounts for the Limelights; we're still trying to figure out where to put them on the final robot.
Front is the side furthest away; back is closest to the camera.
Okay, your cameras are upside down. Flip them, and then your Transform3d for the left camera should be:
```java
// Left camera: 0.3 m behind center, 0.25 m left, on the floor plane.
// Rolled 135 degrees (for the flipped mount) and pitched 20 degrees.
private static final Transform3d robotToCamera1 =
    new Transform3d(
        new Translation3d(-0.3, 0.25, 0),
        new Rotation3d(Math.toRadians(135), Math.toRadians(20), 0));
```
I might not be right; doing transforms in my head is hard, but I would start with that.
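One way to sanity-check a guess like this (a sketch assuming the template's AdvantageKit setup; the output key and drive accessor are hypothetical): transform the drive pose by the camera offset, log it, and eyeball it in AdvantageScope's 3D field view.

```java
import edu.wpi.first.math.geometry.Pose3d;
import org.littletonrobotics.junction.Logger;

// Publish the camera's field pose each loop so it can be checked visually.
Pose3d cameraPose = new Pose3d(drive.getPose()).transformBy(robotToCamera1);
Logger.recordOutput("Vision/FrontCameraPose", cameraPose);
```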
I flipped the camera in the Limelight's interface and set the offsets in there as well.
If you flipped the camera, that's good. Did you set them in the Limelight like how I had it, especially with the rotation of 135? Maybe send a snippet of what you set everything to in the Limelight?
Yes, I set it with the rotation. You can view a pose of the Limelight on the robot, and all three were set properly. But they looked right-side up in that 3D view. I flipped the camera, though, so it should be fine..?
Okay, so you should be fine if they look right in the 3D view. Here is what I would start with:
If you have checked all of these and it still isn't working let me know. I have some time tomorrow and Friday. Maybe we can set up a virtual call.
"Make sure my motors are set to the back"? I don't understand this.
Not motors... Limelights... Sorry, long day. So: "make sure your Limelights are actually on the back of your robot in the GUI, not on the front of your robot."
I gotcha; yeah, it has been a long day. I'm sorry for asking for so much help. I'm just trying to get the code done by the time the alpha bot is done so we can do real testing.
I'm really hoping I can figure out the PhotonVision thing, because it can detect the AprilTags from twice as far away as the Limelights can. At least it can in my testing.
I don't know if I told you this, but using PhotonVision and looking at the vision-only estimated robot pose in AdvantageScope (like visionEstimatedPose2d or something), the robot was near perfect on the field. But when using the full pose estimation as the "robot's location", it keeps zooming away.
Hmmm, that is very odd. We will be testing everything on Saturday. I'll let you know if we find out anything. Can you send me the log file from the robot? A suitable log file would start with the robot facing away from the tags and then turning it to see the tags. You could also send me your code so I can try to fix it; otherwise, looking at the log file, I can hopefully see the issue.
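For what it's worth, one common cause of exactly this symptom is vision measurements being fused with overly trusting standard deviations. A minimal sketch, assuming a WPILib SwerveDrivePoseEstimator named poseEstimator (visionPose2d, timestampSeconds, and the numbers are illustrative):

```java
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.util.Units;

// Conservative standard deviations keep one bad vision frame from yanking
// the fused estimate off the field.
poseEstimator.addVisionMeasurement(
    visionPose2d,
    timestampSeconds,
    VecBuilder.fill(0.5, 0.5, Units.degreesToRadians(30))); // x (m), y (m), heading
```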
I'm going to add you to our repository.
Perfect. If you could send me a log file of the test I mentioned, that would be super useful. Then I can run it in Replay.
We are having a problem with zeroing our swerve modules. We use SDS MK4 modules with two Falcon 500s each. We point all the wheels forward with the bevel gears out, then take the absolute angle value from AdvantageScope and put that into the offset in DriveConstants. But once we start driving the robot on stands, the wheels all point in different directions and spin in different directions.
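A sketch of how an offset like that is usually applied, assuming a Phoenix 6 CANcoder (the CAN ID, constant name, and measured value are hypothetical), in case it helps narrow down where the zeroing goes wrong:

```java
import com.ctre.phoenix6.hardware.CANcoder;
import edu.wpi.first.math.geometry.Rotation2d;

private final CANcoder cancoder = new CANcoder(10); // hypothetical CAN ID

// Offset recorded with the wheel forward and bevel gear out (example value).
public static final Rotation2d FRONT_LEFT_OFFSET = Rotation2d.fromRotations(0.273);

// Module angle = raw absolute reading (in rotations) minus the recorded offset.
public Rotation2d getAngle() {
  return Rotation2d.fromRotations(cancoder.getAbsolutePosition().getValueAsDouble())
      .minus(FRONT_LEFT_OFFSET);
}
```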