AprilRobotics / apriltag

AprilTag is a visual fiducial system popular for robotics research.
https://april.eecs.umich.edu/software/apriltag

Drift of values in estimating pose of camera over time #143

Closed nityashukla closed 3 years ago

nityashukla commented 3 years ago

Firstly, my issue is that I am getting drift in the yaw, pitch, roll and X, Y, Z data while my system is steady. The drift increases over time: the system stays in the same place throughout, yet after an hour the readings have changed.

I am doing some simple mathematical calculations here to find yaw, pitch, roll, and X, Y, Z.

`pose_R` is the 3x3 rotation matrix (array) and `pose_t` is the 3x1 translation vector (array).

```python
pose_r = detected_parameters.pose_R
pose_T = detected_parameters.pose_t
r11 = pose_r[0][0]
r12 = pose_r[0][1]
r13 = pose_r[0][2]
r21 = pose_r[1][0]
r22 = pose_r[1][1]
r23 = pose_r[1][2]
r31 = pose_r[2][0]
r32 = pose_r[2][1]
r33 = pose_r[2][2]

AprilTagPitch = round(degrees(atan(-r31 / sqrt((r32 * r32) + (r33 * r33)))), 3)
AprilTagRoll = round(degrees(atan(-r32 / r33)), 3)
ApriTagYaw = round(degrees(atan(r21 / r11)), 3)
AprilTagX = pose_T[0][0]
AprilTagY = pose_T[1][0]
AprilTagZ = pose_T[2][0]
```

Mathematical calculation reference: http://planning.cs.uiuc.edu/node103.html
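For reference, the formulas from that page can be written with `atan2`, which handles the quadrant automatically and avoids the division-by-zero risk of plain `atan(a/b)`. A minimal sketch using the Z-Y-X (yaw-pitch-roll) convention from the linked page; note the roll sign here follows the reference and differs from the `atan(-r32/r33)` in the snippet above, so signs may need adjusting to match your frame conventions:

```python
from math import atan2, degrees, sqrt

def euler_zyx_from_rotation(R):
    """Yaw, pitch, roll (degrees) from a 3x3 rotation matrix,
    using the Z-Y-X convention from planning.cs.uiuc.edu/node103.html."""
    yaw = degrees(atan2(R[1][0], R[0][0]))
    pitch = degrees(atan2(-R[2][0], sqrt(R[2][1] ** 2 + R[2][2] ** 2)))
    roll = degrees(atan2(R[2][1], R[2][2]))
    return yaw, pitch, roll

# Identity rotation -> all angles zero
print(euler_zyx_from_rotation([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # (0.0, 0.0, 0.0)
```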

Here are screenshots of the observations,

XYZ YawPitchRoll

The X, Y, Z data are in meters.

Can someone suggest a method to reduce or eliminate the drift in the data?

mkrogius commented 3 years ago

I very much doubt the drift is caused by the apriltag code. I would suggest manually overlaying an image from the start of your test run onto one from the end of your test run, and looking at what has changed in the image.
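The overlay check suggested here can be sketched in a few lines of NumPy, assuming the two frames are already loaded as same-size grayscale arrays (loading from disk, e.g. with OpenCV's `imread`, is left out):

```python
import numpy as np

def frame_drift_check(first, last):
    """Per-pixel absolute difference between two grayscale frames.
    Returns (max difference, mean difference); even a one-pixel shift
    of the scene shows up as a band of large differences along edges."""
    diff = np.abs(first.astype(np.int16) - last.astype(np.int16))
    return int(diff.max()), float(diff.mean())

# Toy example: two 4x4 frames differing in a single pixel by 10
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[1, 1] = 10
print(frame_drift_check(a, b))  # (10, 0.625)
```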

nityashukla commented 3 years ago

Below is the code we are using. Have a look at it and suggest changes, or an alternative way to estimate pose that avoids the drift in the values.

```python
from apriltags3 import Detector
from cv2 import (VideoCapture, cvtColor, COLOR_BGR2GRAY, imshow, waitKey,
                 destroyAllWindows, imwrite, CAP_PROP_FRAME_WIDTH,
                 CAP_PROP_FRAME_HEIGHT)
from math import atan, degrees, sqrt
from time import time, sleep
from os import system

# Tag size
TagSize1 = 160    # tag size in mm
TagSize0 = 0.160  # tag size in meters

# Camera intrinsics: [fx, fy, cx, cy]
camera_params1 = [1506.099, 1508.936, 963.116, 481.425]

FrameWidth, FrameHeight = 1920, 1080

sleep(2)

camera_index = 1

# Disable autofocus and fix the focus distance
system('v4l2-ctl -d {} -c focus_auto=0'.format(camera_index))
system('v4l2-ctl -d {} -c focus_absolute=5'.format(camera_index))

cap = VideoCapture(camera_index)
cap.set(CAP_PROP_FRAME_WIDTH, FrameWidth)
cap.set(CAP_PROP_FRAME_HEIGHT, FrameHeight)

at_detector = Detector(searchpath=['apriltags/lib', 'apriltags/lib64'],
                       families='tag36h11',
                       nthreads=1,
                       quad_decimate=1.0,
                       quad_sigma=0.0,
                       refine_edges=1,
                       decode_sharpening=0.25,
                       debug=0)


def YawData(yaw, r11_1, r21_1):
    # Map the atan result into [0, 360) based on the quadrant of (r11, r21)
    if r11_1 > 0 and r21_1 > 0:
        # 1st quadrant
        ApriTagYaw1 = yaw
    if r11_1 < 0 and r21_1 > 0:
        # 2nd quadrant
        ApriTagYaw1 = 180 - abs(yaw)
    if r11_1 < 0 and r21_1 < 0:
        # 3rd quadrant
        ApriTagYaw1 = 180 + yaw
    if r11_1 > 0 and r21_1 < 0:
        # 4th quadrant
        ApriTagYaw1 = 360 - abs(yaw)
    return ApriTagYaw1


def AprilTagData():
    while True:
        try:
            ret, frame = cap.read()
            if ret:
                gray = cvtColor(frame, COLOR_BGR2GRAY)
                tags = at_detector.detect(gray, estimate_tag_pose=True,
                                          camera_params=camera_params1,
                                          tag_size=TagSize0)
                if len(tags) > 0:
                    detected_parameters = tags[0]
                    TAG_ID = detected_parameters.tag_id
                    pose_r = detected_parameters.pose_R
                    pose_T = detected_parameters.pose_t
                    r11 = pose_r[0][0]
                    r12 = pose_r[0][1]
                    r13 = pose_r[0][2]
                    r21 = pose_r[1][0]
                    r22 = pose_r[1][1]
                    r23 = pose_r[1][2]
                    r31 = pose_r[2][0]
                    r32 = pose_r[2][1]
                    r33 = pose_r[2][2]
                    yaw1 = degrees(atan(r21 / r11))
                    ApriTagYaw = round(YawData(yaw1, r11, r21), 3)
                    AprilTagPitch = round(degrees(atan(-r31 / sqrt((r32 * r32) + (r33 * r33)))), 3)
                    AprilTagRoll = round(degrees(atan(-r32 / r33)), 3)
                    AprilTagX = pose_T[0][0]
                    AprilTagY = pose_T[1][0]
                    AprilTagZ = pose_T[2][0]
                    print("id = {} x = {} y = {} z = {} yaw = {} pitch = {} roll = {}".format(
                        TAG_ID, AprilTagX, AprilTagY, AprilTagZ,
                        ApriTagYaw, AprilTagPitch, AprilTagRoll))
        except KeyboardInterrupt:
            break
        except Exception as e:
            print("____Apriltag Detection script exception = {}".format(e))


AprilTagData()
```

mkrogius commented 3 years ago

Please post the first and last image from your dataset. As I said, I very much doubt the problem is with the code.

nityashukla commented 3 years ago

First - before

Last - after

nityashukla commented 3 years ago

@mkrogius can you help with this in some way, with some code or a method?

mkrogius commented 3 years ago

Looking again at the graphs from your first post, it looks like the drift is approximately 2mm total, correct? Using the scale of the tag from your code of 160mm, that works out to a drift of ~6pixels. I don't actually see anything wrong with what apriltag is doing here, for the following reasons:

1) The image has actually shifted, by 1 or 2 pixels, as you can see by comparing the first and second images (up and to the right).

2) The tag appears especially blurry, more blurry than the text. I'm guessing that your photograph is of a scan of a tag being displayed on the computer screen. You can see this by looking at the blurriness of the tag edges, ~6 pixels.

So in conclusion, the problem is most likely with your dataset.

nityashukla commented 3 years ago

Ok, noted, thank you. Can you suggest alternative code to find X, Y, Z, yaw, pitch, and roll?

nityashukla commented 3 years ago

Also, I am detecting the AprilTag in a live video stream and my system is steady, but as you can see from the chart given above there is continuous drift in the readings, and whenever I restart the code the original readings return.

mkrogius commented 3 years ago

There isn't some alternative code. The problem, at least in the data you've shown me, is not with the code. If you want help with this issue, you will have to provide data that makes a more convincing case that the problem is with the apriltag code and not with your experimental setup.

nityashukla commented 3 years ago

Okay, noted.

nityashukla commented 3 years ago

Hey, Here is the link of the google drive,

https://drive.google.com/drive/folders/1hMR5CbS28Fixa3ddWcj_dIvXQA7QQ95J?usp=sharing

In it we have provided:
1) Code for pose estimation
2) The dataset recorded during the experiment
3) Analysis of the experiment
4) Images of the AprilTag captured every minute

Experiment: an AprilTag was placed in front of the camera at a fixed distance and we recorded the pose values in two ways:
1 - Coordinates of the AprilTag wrt the camera
2 - Coordinates of the camera wrt the AprilTag
(These coordinates are very important for our application, and we are getting fluctuations as well as drift in these values, so please suggest a way to get accurate camera coordinates.)

We didn't see any change in the images that we provided (our experimental setup was stable throughout), yet we still got drift in the pose values.

Note: the experiment ran for approximately 7 hours. Our setup was stable; you can verify this with the images we captured every minute.

Please refer to the code, images, dataset, and analysis, and suggest a way to resolve this issue.

brmarkus commented 3 years ago

Comparing the first image with the last image (with a tool called "BeyondCompare"), a difference in time of approx. 7h, it looks like the lighting has changed a lot: image

Instead of using a camera, could you just load e.g. your first image (as a static, still image), keep your loop and calculation running, and check for drift again, please? Just wondering whether the lighting (sunrise? sunset?) has an impact...
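A sketch of this frozen-frame experiment, assuming the detector setup from the earlier script (`at_detector`, `camera_params1`, `TagSize0`; the file name is a placeholder): run detection on the same still frame repeatedly and summarise how much the reported values move with a small helper. Any remaining spread then cannot come from the scene.

```python
def pose_spread(values):
    """Peak-to-peak spread of a sequence of pose readings; zero means
    the detector is perfectly repeatable on the frozen input."""
    return max(values) - min(values)

# Hypothetical usage with the detector from the script above:
# gray = cv2.imread('first_image.png', cv2.IMREAD_GRAYSCALE)
# zs = []
# for _ in range(600):
#     tags = at_detector.detect(gray, estimate_tag_pose=True,
#                               camera_params=camera_params1, tag_size=TagSize0)
#     if tags:
#         zs.append(float(tags[0].pose_t[2][0]))
# print('z spread on a frozen frame (m):', pose_spread(zs))
```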

mkrogius commented 3 years ago

Thanks for adding the additional info, @nityashukla. This image of the tag has much crisper edges, which is probably why the drift is already better.

I think @brmarkus has a good idea for checking whether there is any drift directly caused by the apriltag code vs the lighting.

In my opinion, the lighting is the most likely cause. The total drift in y (from coords of AprilTag wrt Camera) is about 0.5mm, which, given that the tag is 160mm/235pixels, corresponds to a drift of 0.73 pixels. Any sub-pixel estimation is going to depend to some extent on the lighting of the scene, so this is a pretty reasonable amount of drift to be caused by lighting.
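The mm-to-pixel conversion used here is just a scale factor. As a quick check of the arithmetic (0.5 mm of drift, a 160 mm tag spanning roughly 235 pixels):

```python
def drift_in_pixels(drift_mm, tag_mm, tag_px):
    """Convert a metric drift into image pixels via the tag's scale."""
    return drift_mm * tag_px / tag_mm

print(round(drift_in_pixels(0.5, 160, 235), 2))  # 0.73
```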

I should point out that this is already a super tiny amount of drift/fluctuation. How accurate does this need to be in the end?

If you want improved accuracy/reduced fluctuations, here are a number of suggestions in order of importance:

1) The easiest and most reliable way is to increase the size of the tag in the image, either by moving the tag closer to the camera, printing a bigger tag, using a higher-resolution camera, or swapping the camera lens.
2) The edges of the tag in your image are not totally straight. This could be due to the actual physical tag being curved, or due to a slightly incorrect calibration of the radial distortion of your camera. Either of these will cause some problems.
3) Improve the image clarity, especially on the white-black borders. Your current images seem to be in focus (can't tell for sure, always worth improving), but they do look more than a little over-exposed. Also, there are some weird artifacts at the white-black border where the outermost black pixels are extra black compared to the rest of the black pixels. Your camera may be doing some processing/compression that is reducing the quality of the image.
4) Have you tried using the AprilTag PnP instead of the OpenCV version? I'm not sure which is more accurate, but if you are concerned with sub-millimeter accuracy then this may be worth investigating.

In summary, if you are going to be concerned about sub-pixel/sub-millimeter accuracy, you have to be concerned with a lot of different factors that affect the resulting image of the AprilTag.
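For anyone comparing the detector's built-in pose estimate against OpenCV's PnP (suggestion 4 above), the key ingredient is the set of tag corner coordinates in the tag's own frame, matched to the detected image corners. A hedged sketch; the corner ordering below (bottom-left, bottom-right, top-right, top-left) is an assumption and must be checked against what your detector actually returns:

```python
def tag_object_points(tag_size_m):
    """Tag corners in the tag frame (metres), z = 0 on the tag plane.
    Order assumed: bottom-left, bottom-right, top-right, top-left."""
    s = tag_size_m / 2.0
    return [[-s, -s, 0.0], [s, -s, 0.0], [s, s, 0.0], [-s, s, 0.0]]

# Hypothetical usage with OpenCV (det.corners from the detector, intrinsics
# built from camera_params1 = [fx, fy, cx, cy]):
# obj = np.array(tag_object_points(0.160), dtype=np.float32)
# img = np.array(det.corners, dtype=np.float32)
# K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float32)
# ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
```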

maharshi114 commented 3 years ago

@mkrogius, solvePnP is implemented; you can find the code in the Google Drive link, named basic_ATDetection.

We are using solvePnP for camera pose estimation. Of the two, only the camera coordinates are drifting more, not the tag coordinates.