hku-mars / livox_camera_calib

This repository is used for automatic calibration between a high-resolution LiDAR and a camera in targetless scenes.
GNU General Public License v2.0

Point cloud shrinking during optimization phase with calibrated fisheye camera #37

Open VisionaryMind opened 2 years ago

VisionaryMind commented 2 years ago

Thank you for continuing to develop this project. It is extremely useful. This project does not currently support fisheye calibration with OpenCV, so I rewrote the code to accommodate that camera model. Rough optimization works almost perfectly, generating a very close calibration:

[screenshot: rough calibration result]

Unfortunately, when the algorithm enters the optimization stage, it gradually shrinks the point cloud before finalizing the calibration. Here is a snapshot:

[screenshot: point cloud after shrinking during the optimization stage]

I wanted to share a larger area of the screen, but OpenCV resizing is not working (nor is imwrite). If you would like me to share our dataset, please let me know where to upload it. In the meantime, if you have any advice on how to prevent this "shrinking" from happening, please let me know.

I have seen this behavior before when using distortion coefficients and was able to clear it up by setting them to [0,0,0,0]. In this case, they are required to properly conform the points to the fisheye image. I have also tried undistorting the images, and the same shrinking behavior is seen.

VisionaryMind commented 2 years ago

I have been able to get this working by first undistorting the fisheye images and then running them through the unaltered calibration algorithm (without fisheye-specific calls to projectPoints, for example). That approach works, although I would still like to know whether it is possible to calibrate a distorted image directly.
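For reference, this is roughly what the undistortion step looks like. It is a minimal sketch, assuming the fisheye intrinsics K and distortion coefficients D (k1..k4) are already known; the values and file names are placeholders, not anything from this repo:

#include <opencv2/opencv.hpp>

int main() {
  // Placeholder fisheye intrinsics; substitute your own K and D (k1..k4).
  cv::Mat K = (cv::Mat_<double>(3, 3) << 600.0, 0.0, 960.0,
                                         0.0, 600.0, 540.0,
                                         0.0, 0.0, 1.0);
  cv::Mat D = (cv::Mat_<double>(1, 4) << -0.05, 0.01, -0.002, 0.0);

  // Hypothetical input image name.
  cv::Mat img = cv::imread("fisheye_0.png");
  cv::Size size = img.size();

  // Estimate a new pinhole camera matrix that keeps most of the field of view.
  cv::Mat R = cv::Mat::eye(3, 3, CV_64F);
  cv::Mat newK;
  cv::fisheye::estimateNewCameraMatrixForUndistortRectify(K, D, size, R, newK, 0.0, size);

  // Build the remap tables once and warp the image.
  cv::Mat map1, map2, undistorted;
  cv::fisheye::initUndistortRectifyMap(K, D, R, newK, size, CV_16SC2, map1, map2);
  cv::remap(img, undistorted, map1, map2, cv::INTER_LINEAR);

  cv::imwrite("undistorted_0.png", undistorted);
  // Then run the stock calibration with newK as camera_matrix and
  // dist_coeffs set to [0, 0, 0, 0].
  return 0;
}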

HuangVictorAuto commented 2 years ago

@VisionaryMind, hi, at what point do you call a camera a fisheye camera? FOV greater than what? I am also trying to calibrate a camera with a 120° FOV; do I need to use a fisheye model? Thanks!

VisionaryMind commented 2 years ago

@HuangVictorAuto, any lens with a FOV between 100° and 180° counts as a fisheye and requires a special calibration. Ours is a Rokinon lens with a 180° FOV both vertically and horizontally. The problem I am seeing with the Livox calibration is not solved by using a fisheye model for point projection. Instead, you need to find the intrinsic matrix of your lens and undistort the images before attempting to calibrate them with the LiDAR. Procedures such as this work for that purpose.

I believe the problem with using the distorted fisheye images directly is that the Canny edge detection will have a lot of curves in it, especially at the boundaries of the image, and I think that confuses the algorithm this project is using. It could probably be resolved with a little clever programming, but for now, undistorting the images before calibrating works perfectly fine.
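If you still need to recover the intrinsic matrix and fisheye distortion in the first place, a rough sketch using OpenCV's fisheye checkerboard calibration would look something like this (the board size, square size, and image names are placeholders, not part of this project):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
  const cv::Size board(9, 6);   // inner corners of the checkerboard (placeholder)
  const float square = 0.03f;   // square size in metres (placeholder)

  std::vector<std::vector<cv::Point3f>> object_points;
  std::vector<std::vector<cv::Point2f>> image_points;
  cv::Size image_size;

  // Hypothetical image names board_00.png .. board_19.png.
  for (int i = 0; i < 20; ++i) {
    cv::Mat img = cv::imread(cv::format("board_%02d.png", i), cv::IMREAD_GRAYSCALE);
    if (img.empty()) continue;
    image_size = img.size();

    std::vector<cv::Point2f> corners;
    if (!cv::findChessboardCorners(img, board, corners)) continue;
    cv::cornerSubPix(img, corners, cv::Size(5, 5), cv::Size(-1, -1),
                     cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 1e-6));

    // Planar board model: one 3D point per inner corner.
    std::vector<cv::Point3f> obj;
    for (int r = 0; r < board.height; ++r)
      for (int c = 0; c < board.width; ++c)
        obj.emplace_back(c * square, r * square, 0.0f);

    image_points.push_back(corners);
    object_points.push_back(obj);
  }

  // K is the 3x3 intrinsic matrix, D holds the fisheye coefficients k1..k4.
  cv::Mat K, D;
  std::vector<cv::Mat> rvecs, tvecs;
  double rms = cv::fisheye::calibrate(
      object_points, image_points, image_size, K, D, rvecs, tvecs,
      cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC | cv::fisheye::CALIB_FIX_SKEW);

  std::cout << "RMS: " << rms << "\nK:\n" << K << "\nD: " << D << std::endl;
  return 0;
}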

HuangVictorAuto commented 2 years ago

@VisionaryMind, thanks for your reply. You are right, we can work around the problem by undistorting the image first. After checking the code, I found that it only supports four distortion coefficients (k1, k2, p1, p2), not even k3, and has no fisheye model.
My feeling is that, to get good calibration performance from this code, we need to find a very good scene, which is also quite difficult for us.

VisionaryMind commented 2 years ago

@HuangVictorAuto, we have experienced the same difficulty finding outdoor scenes. Auto garages work well if you get there early enough before cars are parked and there are sufficient columns (for depth variance). Something else that has worked for us is using rooms that have angled ceilings and running calibrations from various corners. This will help the Canny edge detection cover the entire sensor. The idea is to have multiple lines in the scene on more than one axis. The greater the depth in the scene, the more accurate the calibration will be for a wider array of use scenarios post-calibration.

You could potentially hack this by putting a large square object in the middle of a room at various positions and angles (similar to the ACSC method). I have actually done that as well with a large LED screen on a wall captured from multiple viewpoints (low / high, left / right). Once you have ~8 different angles, then run them through the single-scene calibration, one by one, to determine which are close enough to use for the multi-calib. If you go straight to multi-calib, you might include poor scene estimations in the final extrinsic calculation.

VisionaryMind commented 2 years ago

> After checking the code, I found that it only supports four distortion coefficients (k1, k2, p1, p2), not even k3, and has no fisheye model.

If you are still interested in getting this to work with a fisheye, you will need to change the following lines of code in lidar_camera_calib.hpp:

#include <opencv2/opencv.hpp>  // pulls in cv::fisheye (calib3d)

// Carry the four fisheye coefficients k1..k4 as members:
float fx_, fy_, cx_, cy_, k1_, k2_, k3_, k4_, s_;

  // Read (k1, k2, k3, k4) from dist_coeffs_ rather than (k1, k2, p1, p2):
  k1_ = dist_coeffs_.at<double>(0, 0);
  k2_ = dist_coeffs_.at<double>(0, 1);
  //p1_ = dist_coeffs_.at<double>(0, 2);
  //p2_ = dist_coeffs_.at<double>(0, 3);
  k3_ = dist_coeffs_.at<double>(0, 2);
  k4_ = dist_coeffs_.at<double>(0, 3);

  // Build the 1x4 fisheye distortion vector:
  cv::Mat distortion_coeff =
        (cv::Mat_<double>(1, 4) << k1_, k2_, k3_, k4_);

// Project with the fisheye model (note the argument order differs from cv::projectPoints):
cv::fisheye::projectPoints(pts_3d, pts_2d, r_vec, t_vec, camera_matrix, distortion_coeff);

You can't pass fisheye distortion coefficients to the regular projectPoints method; the fisheye model uses a different coefficient ordering (k1, k2, k3, k4 instead of k1, k2, p1, p2). You will need to change the dist_coeffs and projectPoints statements in more than one place. After that, re-compile and you will at least be able to run a rough calibration with the fisheye lens. It will still fail during optimization.
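To make the ordering difference concrete, here is a small standalone sketch (not code from this repo; the intrinsics, coefficients, and points are made-up placeholders) that projects the same points with both models:

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
  // A couple of made-up 3D points in the camera frame.
  std::vector<cv::Point3f> pts_3d = {{1.0f, 0.5f, 4.0f}, {-0.8f, 0.2f, 3.0f}};
  std::vector<cv::Point2f> pts_2d;

  // Placeholder intrinsics and an identity pose.
  cv::Mat camera_matrix = (cv::Mat_<double>(3, 3) << 600, 0, 960, 0, 600, 540, 0, 0, 1);
  cv::Mat r_vec = cv::Mat::zeros(3, 1, CV_64F);
  cv::Mat t_vec = cv::Mat::zeros(3, 1, CV_64F);

  // Pinhole model: coefficients are (k1, k2, p1, p2[, k3]) and the output points come last.
  cv::Mat pinhole_dist = (cv::Mat_<double>(1, 4) << 0.1, -0.02, 0.001, 0.0005);
  cv::projectPoints(pts_3d, r_vec, t_vec, camera_matrix, pinhole_dist, pts_2d);

  // Fisheye model: coefficients are (k1, k2, k3, k4), no tangential terms,
  // and the output points are the second argument.
  cv::Mat fisheye_dist = (cv::Mat_<double>(1, 4) << -0.05, 0.01, -0.002, 0.0);
  cv::fisheye::projectPoints(pts_3d, pts_2d, r_vec, t_vec, camera_matrix, fisheye_dist);

  return 0;
}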

aditdoshi333 commented 2 years ago

Hello @VisionaryMind,

I am facing a similar issue with a wide-angle lens: the point cloud keeps shrinking during the optimization phase. Can you please elaborate on how you managed to get it running?

I understand that you rectified the input images, but are there any code changes needed? Do I need to comment out any part of the code?

Any help is appreciated. Thank you