frc2399 / 2023-Season

The power of apriltags in simulation #54

edf42001 opened this issue 1 year ago

edf42001 commented 1 year ago

Last Saturday Anna L and Mahee made a great "point at target" command with photonlib vision sim. Here's the next step: the camera can't point directly at the target, because the camera is off to the side of the robot. And, the april tag is on the front of the cube node, so we actually want to aim behind the apriltag. So:

edf42001 commented 1 year ago

The first two bullets were successfully completed a while ago. I wanted to follow up with more detail on the third. Consider this a tutorial on some things I discovered about photonlib's sim vision system!

The basics

First, let's verify the first two work by re-enabling vision: uncommenting these lines https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/RobotContainer.java#L105 https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/RobotContainer.java#L144 https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/RobotContainer.java#L337 (you'll definitely want to reorganize these classes later)

and binding the aim command to a button, perhaps by replacing the engage command (or you could use a new button)

        new JoystickButton(xboxDriver, Button.kX.value).whileTrue(new CameraAimCmd(driveTrain, photonCamera));

Now when you run the sim, you can aim at the target in the bottom right (a leftover from the example) https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/subsystems/limelight/SimLimelight.java#L55-L56

(gif: the robot turning to aim at the target)

Basic cleanup

You'll note that you can't aim at any of the apriltag targets (which I changed the display of to these nice blue rectangles), because we forgot to add them to the sim vision system (even though we added them to the field). To fix this, start by adding the following line to this for loop: https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/subsystems/limelight/SimLimelight.java#L74-L79

            simVision.addSimVisionTarget(new SimVisionTarget(fieldLayout.getTagPose(i + 1).get(), targetWidth, targetHeight, i + 1));

Now you can aim at targets. You'll note that it always seems to switch to one particular target. This is because the field of view is set super high, so it can almost always see all the targets, and the CameraAim command always picks the same one. Something to think about: you could aim at whichever target is closest to the middle. Try lowering the FOV to 100. https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/subsystems/limelight/SimLimelight.java#L25
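
If you want to try that, here's a rough sketch of picking the most-central target instead of getBestTarget(), using the list from PhotonPipelineResult.getTargets() and the target yaws (this isn't what CameraAimCmd does today, just one idea):

    // Sketch: pick whichever visible target is closest to the center of the
    // image, i.e. the one with the smallest absolute yaw.
    PhotonTrackedTarget centerTarget = null;
    for (PhotonTrackedTarget target : pipelineResult.getTargets()) {
        if (centerTarget == null || Math.abs(target.getYaw()) < Math.abs(centerTarget.getYaw())) {
            centerTarget = target;
        }
    }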

Improvements

It turns out photonlib has a function that does what that for loop does for us. Here it is: https://github.com/PhotonVision/photonvision/blob/39f6ab8805252affe8ac49cf7eadd2ef9245621e/photon-lib/src/main/java/org/photonvision/SimVisionSystem.java#L130-L139

And it uses a neat for loop internally, meaning we can replace this whole block:

        // HERB WANTS TO COOK
        Pose2d poses [] = new Pose2d[8];
        for(int i = 0; i < 8; i++)
        {
            poses[i] = fieldLayout.getTagPose(i + 1).get().toPose2d();
            simVision.addSimVisionTarget(new SimVisionTarget(fieldLayout.getTagPose(i + 1).get(), targetWidth, targetHeight, i + 1));
        }           
        DriveTrain.field.getObject("april tag targets").setPoses(poses);

With

        // HERB WANTS TO COOK
        simVision.addVisionTargets(fieldLayout);

        // Add targets to our sim field
        Pose2d[] poses = new Pose2d[8];
        for (AprilTag tag : fieldLayout.getTags()) {
            poses[tag.ID - 1] = fieldLayout.getTagPose(tag.ID).get().toPose2d();
        }      
        DriveTrain.field.getObject("april tag targets").setPoses(poses);

Cleanup

Now, let's delete some out-of-date things to make it neater. All of the following can be removed:

https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/subsystems/limelight/SimLimelight.java#L45-L61 https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/subsystems/limelight/SimLimelight.java#L68 https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/subsystems/limelight/SimLimelight.java#L72

Also we're missing a newline here :) https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/subsystems/limelight/SimLimelight.java#L80-L83

Much nicer

The good part

We've been assuming the camera is at the center of the robot, but it's not: the camera is off to one side, so if we point the camera at the target, the robot is not pointed at the target :(. To simulate this, we must update our robotToCamera transform, which is the "Transform to move from the center of the robot to the camera's mount position" according to photonlib. I always mess up transforms, so it's important to double-check your work. For example, if the camera is 20cm to the right of the robot's center, and the camera reads that an apriltag is 3cm to its left, then the tag is 17cm to the right of the robot's center. Stuff like that.
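
If it helps, here's a small standalone sketch of that sanity check with WPILib's Transform3d classes (assuming the usual convention of +X forward and +Y left, so "to the right" is negative Y; the 2m forward distance is just a made-up number):

    // Camera mounted 20cm to the right of robot center.
    Transform3d robotToCamera = new Transform3d(
            new Translation3d(0.0, -0.20, 0.0), new Rotation3d());
    // The camera sees a tag 2m ahead of it and 3cm to its left.
    Transform3d cameraToTarget = new Transform3d(
            new Translation3d(2.0, 0.03, 0.0), new Rotation3d());
    // Chaining them puts the tag 17cm to the right of robot center (y = -0.17).
    Transform3d robotToTarget = robotToCamera.plus(cameraToTarget);
    System.out.println(robotToTarget.getTranslation());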

Replace https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/subsystems/limelight/SimLimelight.java#L39-L40 with

                    LimelightConstants.APRILTAG_ROBOT_TO_CAMERA,

and then you can delete https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/subsystems/limelight/SimLimelight.java#L26-L27

But then here, https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/Constants.java#L91-L97 you can actually delete CAMERA_PITCH_RADIANS, CAMERA_HEIGHT_METERS, and TARGET_HEIGHT_METERS, but transfer the 1.0 from camera height to replace the Units.inchesToMeters(0) z-value of the Transform3d in APRILTAG_CAMERA_TO_ROBOT (replace it with the real camera height off the ground later). Actually, it should be -1, because it's camera -> robot; I know because later I was looking at the z coordinate of a target and it was way too high (1.7m). You know, it would probably be easier to define ROBOT_TO_CAMERA directly and have CAMERA_TO_ROBOT be the inverse.
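
For example, a sketch of that idea in Constants could look something like this (the placeholder translation values are illustrative, not measured):

    // Define robot -> camera directly (easier to reason about), then derive
    // camera -> robot by inverting it. z = 1.0 is the placeholder camera height;
    // replace it with the real height off the ground later.
    public static final Transform3d APRILTAG_ROBOT_TO_CAMERA = new Transform3d(
            new Translation3d(Units.inchesToMeters(0), Units.inchesToMeters(0), 1.0),
            new Rotation3d(0, 0, 0));
    public static final Transform3d APRILTAG_CAMERA_TO_ROBOT = APRILTAG_ROBOT_TO_CAMERA.inverse();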

Now, is this transform correct? We can check by visualizing it with the photonvision sim field. (image: the sim field, with a second robot drawn at the camera pose) We can see the answer is "definitely not" (the second robot is the camera pose; we should change its display to something different). To fix this, replace https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/Constants.java#L93 with

            new Rotation3d(0, 0, 0)); 

(image: the sim field with the corrected camera pose) That's better. Obviously, there's some stuff to figure out here. What should the rotation be? Do we need a separate transform variable because most camera coordinate systems are rotated, like in this picture? https://www.mathworks.com/help/examples/ros/win64/AccessTheTfTransformationTreeInROSExample_01.png

Are the transforms correct, or are they backwards? Transforms are confusing; it takes a while to get used to them. Regardless, now if you try to aim at a target, the camera points at it but the robot does not. Cool!

A simple fix

A quick aside: you should fix the modAngle in the CameraAimCommand by replacing it with the new version. Now that we use it so much, you should probably make it a Utils function: https://github.com/frc2399/2023-Season/blob/146b37c32bba7004db68c31f7d01326b7017b9ae/src/main/java/frc/robot/commands/drivetrain/DriveStraightGivenDistance.java#L97-L101
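
For reference, a minimal sketch of what that Utils function could look like (this one uses WPILib's MathUtil.angleModulus; the existing modAngle implementation may differ):

    // Shared angle-wrapping helper; the body could also just be copied from
    // the newer modAngle in DriveStraightGivenDistance.
    public static double modAngle(double angleRadians) {
        // Wraps the angle into the range [-pi, pi].
        return MathUtil.angleModulus(angleRadians);
    }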

First, right at the top of the for loop, add these two lines:

    if(pipelineResult.hasTargets())
    {
      Transform3d targetPoseInCameraFrame = pipelineResult.getBestTarget().getBestCameraToTarget();
      System.out.println("Initial pose: " + targetPoseInCameraFrame);

Play around and get a sense for the 3D position of the target. Also, you can remove the end condition from the command if you don't want it to keep ending and stopping the printing (this happened to me).
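
One way to do that while debugging (a temporary tweak, not something to keep):

    // Temporarily never finish, so the command keeps running and printing in sim.
    @Override
    public boolean isFinished() {
        return false;
    }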

Next, transform the target pose to be relative to the robot, like so:

      Transform3d targetPoseInCameraFrame = pipelineResult.getBestTarget().getBestCameraToTarget();
      System.out.println("Initial pose: " + targetPoseInCameraFrame);

      Transform3d targetPoseInRobotFrame = targetPoseInCameraFrame.plus(LimelightConstants.APRILTAG_CAMERA_TO_ROBOT);
      System.out.println("Transformed pose: " + targetPoseInRobotFrame);

Take a good long look at the differences, to see if they make sense. Use the photonvision field or draw a picture.

Then, replace the error line like so:

      double error = Math.atan2(targetPoseInRobotFrame.getY(), targetPoseInRobotFrame.getX());
      System.out.println("Angle error: " + error);

and there you go! It aims correctly again. Note that you could transform the pose further: for example, instead of pointing at the apriltag, point behind it (at the cube node) or off to the side (at the cone node).
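
As a sketch of the "aim behind the tag" idea, you can apply one more transform on top of targetPoseInRobotFrame. This assumes the tag's local +X axis points out of its face (double-check that on the sim field), and the 0.2m offset is a made-up placeholder, not the real node depth:

      // Aim at a point behind the tag (roughly where the cube node is) instead
      // of at the tag itself. "Behind" is -X in the tag's own frame here.
      Transform3d tagToAimPoint = new Transform3d(
              new Translation3d(-0.2, 0.0, 0.0), new Rotation3d());
      Transform3d aimPoseInRobotFrame = targetPoseInRobotFrame.plus(tagToAimPoint);
      double error = Math.atan2(aimPoseInRobotFrame.getY(), aimPoseInRobotFrame.getX());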

Also, this is still using the target's position directly: the error comes straight from the camera's measurement of the target, but the camera has a slow framerate. What's better is to use the camera measurement to figure out where the robot is on the field; then you can use odometry and the gyro to aim at a specific point. (That's what the pose estimator is for.)
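
A rough sketch of that idea (poseEstimator and aimPoint are assumed to exist elsewhere, e.g. a DifferentialDrivePoseEstimator fed by odometry plus vision, and the field coordinates of a node):

    // Aim at a known field point using the fused pose estimate instead of the
    // raw camera measurement.
    Pose2d robotPose = poseEstimator.getEstimatedPosition();
    Translation2d toTarget = aimPoint.minus(robotPose.getTranslation());
    double angleToTarget = Math.atan2(toTarget.getY(), toTarget.getX());
    double error = angleToTarget - robotPose.getRotation().getRadians();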

Let me know if there are any questions!