HULKs / hulk


Pixel to Robot transformation is not correct at extreme head joint positions #335

Open · tuxbotix opened this issue 1 year ago

tuxbotix commented 1 year ago

From recent experience and past observations dating back to 2018, there are noticeable inaccuracies in the perceived positions of the ball, field lines, etc. obtained by pixel-to-robot projection when the head yaw or pitch moves towards its extremes.

This suggests that our kinematic chain has inaccuracies which are not observable when the head pitch/yaw is near zero.

Latest observations:

Maxi found that when the robot is static (standing, not walking) and the ball is at its feet, moving the head (looking around) makes the ball's perceived position relative to the robot oscillate. The image below shows the perceived ball position in robot coordinates: the ball appears to move by roughly ±20 cm while both the robot and the ball are stationary.

[Image: perceived ball position in robot coordinates while the head looks around]

In addition, the ball model is very jumpy/noisy, which makes things even worse.

Mathematical representation:

[Image: mathematical representation of the pixel-to-robot transformation chain]
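The original image is not recoverable here, but the chain in question presumably looks like the following. This is a sketch of the usual pinhole projection pipeline; where exactly Rext is applied is an assumption, and all symbols other than camera2headUncalib and Rext are chosen here for illustration:

$$
\mathbf{p}_{\text{camera}} = R_{\text{ext}} \cdot \texttt{camera2headUncalib}^{-1} \cdot T_{\text{robot} \to \text{head}}(\theta) \cdot \mathbf{p}_{\text{robot}},
\qquad
\mathbf{p}_{\text{pixel}} = K \, \Pi(\mathbf{p}_{\text{camera}})
$$

where $T_{\text{robot} \to \text{head}}(\theta)$ is the forward kinematics of the torso/neck at joint angles $\theta$, $K$ is the intrinsic matrix, and $\Pi$ is the perspective division. Pixel-to-robot projection inverts this chain: back-project the pixel to a view ray and intersect it with the ground plane, which is why small errors in any factor get amplified with distance.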

Possible problem points:

The camera-to-head transformation given by the manufacturer is not correct

The position of the neck relative to the camera (camera-to-head) may be wrong. In the code this is denoted camera2headUncalib; the values are provided by Aldebaran. There were also discussions between some of the teams on this topic when the V6 arrived.

Extrinsic modelling is not complete enough to capture significant position shifts

Extrinsic compensation is currently applied in the form of a rotation matrix (Rext). This may not be sufficient; we might need a complete transformation with both rotation and translation. The translation becomes important especially if, as in the point above, the manufacturer-provided values are not accurate enough. See the sketch below.
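A minimal numpy sketch of why the translation part matters. All numbers below are illustrative, not real Aldebaran or calibration values:

```python
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    t = np.eye(4)
    t[:3, :3] = rotation
    t[:3, 3] = translation
    return t

def rot_y(angle):
    """Rotation about the y axis (pitch), angle in radians."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def ground_point(camera_to_robot, ray_camera):
    """Intersect a camera-frame view ray with the ground plane z = 0."""
    origin = camera_to_robot[:3, 3]
    direction = camera_to_robot[:3, :3] @ ray_camera
    return origin - origin[2] / direction[2] * direction

# Camera 0.5 m above ground, optical axis tilted 10 degrees below the horizon
# (the 90-degree offset maps the camera's z axis into the robot's x-z plane).
tilt = np.deg2rad(10.0)
nominal = make_transform(rot_y(np.pi / 2 + tilt), np.array([0.05, 0.0, 0.50]))

# The same pose with a few millimetres of translation error, which a
# rotation-only R_ext cannot absorb.
shifted = nominal.copy()
shifted[:3, 3] += np.array([0.005, 0.0, 0.002])

optical_axis = np.array([0.0, 0.0, 1.0])
error = ground_point(shifted, optical_axis) - ground_point(nominal, optical_axis)
print("ground-plane error [m]:", error)  # centimetre scale, grows with distance
```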

Intrinsic modelling is too simple

We use only the basic pinhole projection model and do not account for radial or other distortions. These distortions naturally also affect 3D perception through the camera, especially at longer distances. A sketch of what the pinhole model ignores follows below.
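A minimal sketch using the standard two-term radial distortion model, with hypothetical intrinsics and coefficients:

```python
import numpy as np

# Hypothetical intrinsics for a 640x480 image; real values come from calibration.
FX = FY = 580.0
CX, CY = 320.0, 240.0

def project_pinhole(point_camera):
    """Plain pinhole projection of a 3D point given in camera coordinates."""
    x, y = point_camera[0] / point_camera[2], point_camera[1] / point_camera[2]
    return np.array([FX * x + CX, FY * y + CY])

def project_with_radial(point_camera, k1, k2):
    """Pinhole projection with two-term radial distortion applied."""
    x, y = point_camera[0] / point_camera[2], point_camera[1] / point_camera[2]
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return np.array([FX * x * factor + CX, FY * y * factor + CY])

# A point near the image corner, where radial distortion bites hardest.
point = np.array([0.5, 0.35, 1.0])
ideal = project_pinhole(point)
real = project_with_radial(point, k1=-0.05, k2=0.01)  # hypothetical coefficients
print("pixel error ignored by our model:", np.linalg.norm(real - ideal))
```

A few pixels of error near the image border turn into a sizeable ground-position error once the ray is intersected with the field plane at distance.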

Investigation plan:

  1. Get a robot that shows these symptoms and do intrinsic calibration with Python + OpenCV, the MATLAB calibration toolbox, etc. (see the sketch after this list).
    • If we see significant distortion values, then our intrinsic modelling is not good enough and we are in trouble xD
  2. Do extrinsic calibration with such a toolkit plus twix or whatever dumps images and joint angles/camera matrices.
    • Use our old calibration stand, as it gives a repeatable position.
    • The ground-to-camera matrix contains both rotation and translation. Using the recorded ground_to_head etc., extract head_to_camera_calibrated and analyze whether there are translations beyond what the manufacturer specifies (also sketched below).
  3. Maybe spin off as its own issue: improve the ball model so it is not jumpy.
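A rough sketch of steps 1 and 2 with Python + OpenCV. The checkerboard geometry, the image dump location, and the assumption that the calibration stand pins the target frame to the ground frame are all placeholders to be adapted:

```python
import glob

import cv2
import numpy as np

# Checkerboard geometry: inner-corner count and square size (assumed values;
# match them to the actual calibration target).
PATTERN = (9, 6)
SQUARE_SIZE = 0.025  # metres

# 3D corner positions of the target in its own frame (z = 0 plane).
object_corners = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
object_corners[:, :2] = (
    np.mgrid[0 : PATTERN[0], 0 : PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE
)

object_points, image_points = [], []
for path in glob.glob("calibration_images/*.png"):  # hypothetical dump location
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3),
        )
        object_points.append(object_corners)
        image_points.append(corners)

# Step 1: intrinsic calibration. Large distortion coefficients would mean the
# plain pinhole model is indeed too simple.
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, gray.shape[::-1], None, None
)
print("RMS reprojection error [px]:", rms)
print("distortion (k1 k2 p1 p2 k3):", dist_coeffs.ravel())

# Step 2: extrinsics from one stand image. solvePnP yields target-to-camera;
# if the stand fixes the target in the ground frame, this is ground-to-camera.
_, rvec, tvec = cv2.solvePnP(object_corners, image_points[0], camera_matrix, dist_coeffs)
ground_to_camera = np.eye(4)
ground_to_camera[:3, :3], _ = cv2.Rodrigues(rvec)
ground_to_camera[:3, 3] = tvec.ravel()

# Chain with the recorded kinematics (placeholder identity here) to isolate
# the calibrated head-to-camera transform, then compare its translation with
# the manufacturer-provided camera2headUncalib.
ground_to_head = np.eye(4)  # comes from the recorded joint angles in practice
head_to_camera_calibrated = ground_to_camera @ np.linalg.inv(ground_to_head)
print("camera translation in head frame [m]:", head_to_camera_calibrated[:3, 3])
```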
oleflb commented 10 months ago

What is the state here? Is this still in progress?