RoboCup-SSL / ssl-vision

Shared Vision System For The RoboCup Small Size League
GNU General Public License v3.0

Tangential Distortion / More Radial Distortion coefficients #159

Open rolfvdhulst opened 4 years ago

rolfvdhulst commented 4 years ago

I have been looking at the class camera_calibration a lot lately, and I was wondering why a camera distortion model with only a single radial distortion coefficient was chosen.

Is there a good reason tangential distortion is ignored, or was it simply found too computationally intensive / there was no time to work on it so far? Using a single degree of radial distortion seems quite minimalistic too. I am just learning how cameras and camera calibration work, so I'd love some better explanations if you have any. Although the lower orders dominate, there could be significant improvements here, which would for example minimize the calibration error between two cameras at the middle line.

In particular, since #148 discusses adjusting the distortion model to better support negative distortion, I thought it would be a good idea to mention this possibility as well. It would not affect runtime performance significantly, as finding the roots of the distortion function is only necessary during calibration and for visualizing the calibration in the SSL-Vision interface.
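For reference, the extra terms I am asking about can be sketched as follows. This is a hedged illustration of the common Brown-Conrady convention (radial coefficients k1..k3 plus tangential p1, p2, as used by e.g. OpenCV), not the actual ssl-vision model, and the struct and function names are hypothetical:

```cpp
#include <cmath>

// Illustrative distortion coefficients in the Brown-Conrady convention.
// ssl-vision currently uses only a single radial term; the extra terms
// below are the ones this issue asks about.
struct Distortion {
    double k1, k2, k3;  // radial coefficients
    double p1, p2;      // tangential coefficients
};

// Map an undistorted normalized image point (x, y) to its distorted
// position (xd, yd): radial scaling plus the tangential shift.
void distort(const Distortion& d, double x, double y,
             double& xd, double& yd) {
    const double r2 = x * x + y * y;
    const double radial = 1.0 + r2 * (d.k1 + r2 * (d.k2 + r2 * d.k3));
    xd = x * radial + 2.0 * d.p1 * x * y + d.p2 * (r2 + 2.0 * x * x);
    yd = y * radial + d.p1 * (r2 + 2.0 * y * y) + 2.0 * d.p2 * x * y;
}
```

With p1 = p2 = 0 this reduces to a purely radial model, which only moves points along the ray from the principal point; the tangential terms are what would capture a lens that is slightly tilted relative to the sensor.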

You can view this as a 'feature request'

g3force commented 4 years ago

@rhololkeolke and I are experimenting with OpenCV's chessboard-based camera intrinsic calibration. It might be an alternative to #148.

@joydeep-b might know more about the history and decisions of the camera model. Keep in mind, that the code is quite old ;)

rolfvdhulst commented 4 years ago

Interesting! If you need any help, I'm enthusiastic about reviewing or building things. I could definitely see the chessboard approach working out; if you calibrate the camera and undistort the image using OpenCV, the field calibration to find the position and orientation of the camera should become a lot simpler.

joydeep-b commented 4 years ago

Distortion models can get arbitrarily complex, including tangential distortions, cylindrical distortions, large FOVs, etc. However, in practice, the lenses and cameras that we use with ssl-vision do not exhibit these kinds of distortions. Or more precisely, the impact of correcting for these distortions on the re-projection error is negligible.

However, adding more complex distortion models (including negative radial distortion, see #148) significantly slows down the computation. For example, handling negative radial distortion requires solving a general-form cubic equation rather than a special case, for every pixel being undistorted. This is why we add more complex distortion models only when needed.
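To make the cubic concrete: assuming the single-coefficient radial model discussed in this thread, r_d = r_u (1 + k r_u^2) (a sketch of the idea, not ssl-vision's exact code), undistorting a pixel means solving k r^3 + r - r_d = 0 for the undistorted radius r. For k >= 0 the left-hand side is strictly increasing, so there is exactly one real root; for k < 0 the general cubic can have several, which is what #148 has to handle. An illustrative Newton iteration for the k >= 0 case:

```cpp
#include <cmath>

// Assumed single-coefficient radial model: r_d = r_u * (1 + k * r_u^2).
double distort_radius(double k, double r_u) {
    return r_u * (1.0 + k * r_u * r_u);
}

// Undistortion solves k*r^3 + r - r_d = 0. For k >= 0 the derivative
// 3*k*r^2 + 1 is always positive, so the root is unique and Newton's
// method from r = r_d converges quickly (illustrative only; for k < 0
// root selection becomes the hard part).
double undistort_radius(double k, double r_d) {
    double r = r_d;  // good initial guess for mild distortion
    for (int i = 0; i < 50; ++i) {
        const double f = k * r * r * r + r - r_d;
        const double df = 3.0 * k * r * r + 1.0;
        r -= f / df;
    }
    return r;
}
```

Doing this per pixel is the cost being described: the special-case cubic has a cheap closed form, while the general case does not.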

If you have a system that does experience significant tangential distortion, you could share example images to help investigate the magnitude of the error. The need for supporting negative radial distortion is evidenced by the newer cameras, but at the moment we don't have evidence that tangential distortion is actually observed.

rolfvdhulst commented 4 years ago

After some digging I agree with you. My question came from a place of interest: I am building a simulator which uses real camera calibrations to compute pixel positions to forward to the user. I was simulating using the camera calibrations from previous RoboCups, which gave problems because the calibration used at RoboCup 2019 was quite off near the boundaries of the simulated camera image due to #148; the manual calibration is okay but not the best, giving a significant reprojection error. Since the error grew further away from the principal point, I thought the problem was due to the distortion coefficients. RoboCup 2018 works just fine, so it simply comes down to the automatic calibration not working.

Personally I am more concerned with the 'middle line' effect, where the calibrations of two overlapping cameras detect the robots in two distinct locations with some error between them (5-8 cm at the previous RoboCup). However, I do not know where this error originates from or how to reduce it effectively. If you could shed more light on this I'd be interested, but feel free to close this issue, as my original question is solved.

g3force commented 4 years ago

My guess is that the overlap is due to #148. As soon as we have a solution for it (either by implementing #148 or by using the calibration result from the chessboard pattern), we can check the overlap again.

rolfvdhulst commented 4 years ago

@g3force that should not actually solve the overlap problem, as the overlap was also a problem in 2018, when there was heavy positive distortion on all cameras.

joydeep-b commented 4 years ago

Yes, agreed, that problem is related to the fact that single camera frames now cover a larger area of the field, and we do not get enough features across the image for a good calibration. It would definitely help to get more features near the centers of the image for calibration.

g3force commented 4 years ago

@rolfvdhulst the chess board calibration is now available in a first working draft: #163