GeorgeWang26 opened this issue 2 years ago
You would need to undistort the image before the detection. OpenCV has functions for this. Be careful, as the undistorted image has a different camera matrix, so make sure you use the new, undistorted camera parameters.
See this: https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html, in particular the cv.getOptimalNewCameraMatrix, cv.initUndistortRectifyMap and cv.remap functions.
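A minimal sketch of that pipeline is below; the camera matrix, distortion coefficients, and image size are illustrative placeholders, not values from any particular camera.

    import cv2
    import numpy as np

    # Placeholder calibration values for illustration only
    K = np.array([[780.0, 0.0, 400.0],
                  [0.0, 780.0, 300.0],
                  [0.0, 0.0, 1.0]])
    dist = np.array([0.1, -0.4, 0.0, 0.0, 0.0])
    w, h = 800, 600

    # New camera matrix for the undistorted image; alpha=1 keeps all source pixels
    new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 1, (w, h))

    # Precompute the undistortion maps once, then remap every incoming frame
    map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, new_K, (w, h), cv2.CV_16SC2)

    frame = np.zeros((h, w, 3), dtype=np.uint8)  # stand-in for a distorted camera frame
    undistorted = cv2.remap(frame, map1, map2, cv2.INTER_LINEAR)

    # The undistorted image follows new_K, so pass its intrinsics to the tag detector
    fx, fy, cx, cy = new_K[0, 0], new_K[1, 1], new_K[0, 2], new_K[1, 2]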
Sorry about the late follow-up.
I looked at the OpenCV docs and came up with some code to undistort the image. Can you take a look to make sure I got everything correct?
Specifically: after computing new_camera_mtx, I should now use new_camera_params with the undistorted image in Detector.detect(), right?
Thank you so much!
import cv2
import numpy
from cv_bridge import CvBridge
from rclpy.node import Node
from scipy.spatial.transform import Rotation as R
from sensor_msgs.msg import Image
from dt_apriltags import Detector  # assuming the dt_apriltags bindings; substitute your apriltag package


class StreamSub(Node):
    def __init__(self):
        super().__init__('ros_to_cv')
        self.subscription = self.create_subscription(Image, 'image', self.callback, 1)
        self.at_detector = Detector(families='tag36h11',
                                    nthreads=1,
                                    quad_decimate=1.0,
                                    quad_sigma=0.0,
                                    refine_edges=1,
                                    decode_sharpening=0.25,
                                    debug=0)
        # Intrinsics and distortion coefficients from camera calibration
        self.camera_mtx = numpy.array([782.834096, 0.000000, 415.967175,
                                       0.000000, 783.217076, 320.346941,
                                       0.000000, 0.000000, 1.000000]).reshape((3, 3))
        self.camera_params = (self.camera_mtx[0, 0], self.camera_mtx[1, 1],
                              self.camera_mtx[0, 2], self.camera_mtx[1, 2])
        self.distortion = numpy.array([0.115288, -0.467219, 0.007459, 0.004567, 0.000000])
        w = 800
        h = 600
        # Camera matrix for the undistorted image (alpha=1 keeps all source pixels)
        self.new_camera_mtx, self.roi = cv2.getOptimalNewCameraMatrix(
            self.camera_mtx, self.distortion, (w, h), 1, (w, h))
        self.new_camera_params = (self.new_camera_mtx[0, 0], self.new_camera_mtx[1, 1],
                                  self.new_camera_mtx[0, 2], self.new_camera_mtx[1, 2])
        self.tag_size = 0.166

    def callback(self, data):
        cv_bridge = CvBridge()
        frame = cv_bridge.imgmsg_to_cv2(data, "bgr8")
        # undistort
        dst = cv2.undistort(frame, self.camera_mtx, self.distortion, None, self.new_camera_mtx)
        # crop the image to the valid region returned by getOptimalNewCameraMatrix
        x, y, w, h = self.roi
        dst = dst[y:y + h, x:x + w]
        gray_dst = cv2.cvtColor(dst, cv2.COLOR_BGR2GRAY)
        # should I use new_camera_params or camera_params?
        tags = self.at_detector.detect(gray_dst, True, self.new_camera_params, self.tag_size)
        for tag in tags:
            r = R.from_matrix(tag.pose_R)
            euler = r.as_euler("xyz", degrees=True)
            print("xyz", euler)
            # Re-order the camera-frame translation into x-forward, y-left, z-up
            x = tag.pose_t[2][0]
            y = -1 * tag.pose_t[0][0]
            z = -1 * tag.pose_t[1][0]
            print(x, y, z)
            print("=====================")
        cv2.waitKey(1)
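One detail worth flagging with this approach: the image produced by cv2.undistort with new_camera_mtx follows the new camera matrix, so new_camera_params is indeed the matching set; however, cropping to self.roi shifts the pixel origin, so strictly the principal point should be offset by the crop corner. A minimal sketch of that adjustment (cropped_params is just an illustrative helper name, not part of any library):

    def cropped_params(new_camera_mtx, roi):
        # Intrinsics (fx, fy, cx, cy) matching an image cropped to roi = (x, y, w, h).
        # Cropping moves the pixel origin to (x, y), so cx and cy shift with it.
        x, y, _, _ = roi
        fx, fy = new_camera_mtx[0, 0], new_camera_mtx[1, 1]
        cx, cy = new_camera_mtx[0, 2] - x, new_camera_mtx[1, 2] - y
        return (fx, fy, cx, cy)

These values would then be passed to detect() in place of new_camera_params whenever the cropped image is used.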
I would like to know the answer to the same question @GeorgeWang26 @AleksandarPetrov
I would also like to know the answer to the same question @GeorgeWang26 @AleksandarPetrov @imdafei
I want to use apriltag to do accurate pose estimation (both translation and orientation). It seems that only the intrinsic matrix is used for pose estimation, without the distortion coefficients. Will the pose estimation lose accuracy? If so, is there a place where I can include the distortion coefficients to rectify my image?