Elucidation / StereoColorTracking

3D point tracking using stereo camera with color thresholding and disparity

More Info on finding the projection matrices #3

Open originholic opened 9 years ago

originholic commented 9 years ago

Hello, I am glad to have found this project; it finally gives me some hints for starting my own similar project! I set up the stereo camera and ROS as described in the README file and calibrated the cameras, then replaced the calibrated yaml files for the ROS camera driver and the camera K matrices in disparity_track.py.

However, I couldn't get accurate tracking when I rosrun disparity_track.py: no matter how I changed the disparity ratio, the tracking marker wobbled around when I moved the object.

In issue #1 you mentioned that you manually tweaked the projection matrices. Could you provide some more information on how you tuned the camera parameters? Is there any camera setting I need to change in the code as well?

Elucidation commented 9 years ago

Could you show your calibrated yaml file and camera K matrices here?

If I remember right (it's been a while), the issue was that the calibrated yaml and/or camera K matrices did not have the correct x & y offset to the center of the image. Since we know the camera resolution, we can manually set x offset = width/2 and y offset = height/2 in the K matrix, as in image_track.py#44 (in this case a 640x480 image):

    self.center_x = (640.0/2.0) # half x pixels
    self.center_y = (480.0/2.0) # half y pixels
    self.Kinv = np.matrix([[861.7849547322785, 0, 320.0], 
                           [0, 848.6931340450212, 240.0],
                           [0, 0, 1]]).I # K inverse

The values in the K matrix are:

self.Kinv = np.matrix([[focal_length_x, 0,width/2], 
                       [0, focal_length_y, height/2],
                       [0, 0, 1]]).I # K inverse
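
As a quick sanity check (just a sketch using the example focal lengths above, not code from the repo), a K matrix with the offset at the image center back-projects the center pixel straight along the optical axis:

    import numpy as np

    width, height = 640.0, 480.0
    fx, fy = 861.7849547322785, 848.6931340450212  # example focal lengths from above

    K = np.matrix([[fx,  0, width / 2.0],
                   [ 0, fy, height / 2.0],
                   [ 0,  0, 1]])
    Kinv = K.I  # K inverse, same form as in image_track.py

    # The center pixel should map to the ray straight down the optical axis.
    center_pixel = np.matrix([[width / 2.0], [height / 2.0], [1.0]])
    print((Kinv * center_pixel).T)  # ~ [[0, 0, 1]]

If the offset in K is wrong, that same center pixel maps to a tilted ray, and every triangulated point gets shifted accordingly.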

Similarly, I think I modified the yaml files for both cameras, e.g. left_camera.yaml#L7, to have the correct offset.

camera_matrix:
  rows: 3
  cols: 3
  data: [861.7849547322785, 0, 320.0, 0, 848.6931340450212, 240.0, 0, 0, 1]

Here -> [861.7849547322785, 0, 320.0, 0, 848.6931340450212, 240.0, 0, 0, 1]

I can't remember if I also modified the focal lengths; I probably kept them the same as from the calibration step.
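
If it helps, a quick check along these lines (not part of the repo; it assumes the calibration yaml also carries the standard image_width/image_height fields, and 'left_camera.yaml' is just an example path) will flag an off-center principal point:

    import yaml

    with open('left_camera.yaml') as f:  # example path
        calib = yaml.safe_load(f)

    w, h = calib['image_width'], calib['image_height']
    K = calib['camera_matrix']['data']   # row-major 3x3: [fx, 0, cx, 0, fy, cy, 0, 0, 1]
    cx, cy = K[2], K[5]

    if abs(cx - w / 2.0) > 1.0 or abs(cy - h / 2.0) > 1.0:
        print('principal point (%.1f, %.1f) is not at the image center (%.1f, %.1f)'
              % (cx, cy, w / 2.0, h / 2.0))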

originholic commented 9 years ago

Hello, many thanks for pointing that out. Indeed, I didn't change the x/y offset of the K matrix in the .yaml files or in disparity_track.py. The following is the contents of the left camera file:

image_width: 640
image_height: 480
camera_name: narrow_stereo
camera_matrix:
  rows: 3
  cols: 3
  data: [252.729874, 0, 335.235743, 0, 254.404887, 250.742853, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.045236, -0.00208, -0.000102, 0.000641, 0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1, 0, 0, 0, 1, 0, 0, 0, 1]
projection_matrix:
  rows: 3
  cols: 4
  data: [227.225693, 0, 340.582834, 0, 0, 241.936142, 251.513529, 0, 0, 0, 1, 0]

I don't have my hardware setup with me right now, so I will try it later. Did you change the x/y offset of the projection_matrix in the left_camera.yaml file as well?

By the way, of the Python scripts, only disparity_track.py is needed for my project. From my understanding, disparity_track.py tracks the object's x/y position along with the depth info (I sketched how I picture the geometry below); please correct me if I am wrong. I assume you used the calibrated x/y focal length values in the K matrix:

self.Kinv = np.matrix([[861.7849547322785, 0, 320.0], 
                       [0, 848.6931340450212, 240.0],
                       [0, 0, 1]]).I

However, in image_track.py the x/y focal lengths are both set to 640. Is there any special reason for this?
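
Here is the sketch I mentioned: roughly how I picture the 3D point coming out of the disparity (just the standard pinhole stereo model with a made-up baseline value, not your actual code):

    import numpy as np

    fx, fy = 861.7849547322785, 848.6931340450212  # calibrated focal lengths
    cx, cy = 320.0, 240.0                          # principal point at the image center
    baseline = 0.06                                # made-up baseline in meters

    def point_from_disparity(u, v, disparity):
        # Depth from disparity, then back-project the pixel through K.
        Z = fx * baseline / disparity  # depth along the optical axis
        X = (u - cx) * Z / fx
        Y = (v - cy) * Z / fy
        return np.array([X, Y, Z])

    print(point_from_disparity(400, 260, 25.0))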

Thanks a lot