The calibration needs to be done on an undistorted image; see below. If the provided undistortion does not work for your specific camera, you'll need to figure out how to undistort its images yourself. There is no need to undistort the lidar points. https://github.com/acfr/cam_lidar_calibration/blob/e01d79fdcaebf31d634fc479f0e7e4775d87d502/src/feature_extractor.cpp#L581-L592
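For reference, here is a minimal sketch of that kind of fisheye undistortion with OpenCV; the function name and arguments are illustrative, not the repo's actual code:

```cpp
// Minimal sketch of fisheye undistortion (illustrative, not the repo's code).
// K is the 3x3 camera matrix; D is the 4-element fisheye distortion vector.
#include <opencv2/calib3d.hpp>

cv::Mat undistortFisheye(const cv::Mat& distorted, const cv::Mat& K, const cv::Mat& D)
{
  cv::Mat undistorted;
  // Reusing K as the new camera matrix keeps the original focal length/centre;
  // cv::fisheye::estimateNewCameraMatrixForUndistortRectify() is an alternative.
  cv::fisheye::undistortImage(distorted, undistorted, K, D, K);
  return undistorted;
}
```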
When I project into the image in assess_calibration.cpp, I choose to leave the image distorted and instead distort the projected lidar points to match the fisheye image. The alternative is to undistort the image and then project the lidar points into the undistorted image.
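To illustrate the two options, here is a hedged sketch, assuming `lidar_cam` already holds the lidar points transformed into the camera frame (the function and variable names are mine, not the repo's):

```cpp
// Sketch of the two projection routes (names here are illustrative).
#include <opencv2/calib3d.hpp>
#include <vector>

void projectLidar(const std::vector<cv::Point3f>& lidar_cam,  // camera frame
                  const cv::Mat& K, const cv::Mat& D,
                  std::vector<cv::Point2f>& px_distorted,
                  std::vector<cv::Point2f>& px_undistorted)
{
  const cv::Vec3d zero(0, 0, 0);  // points are already in the camera frame
  // (a) Leave the image distorted and distort the projected lidar points.
  cv::fisheye::projectPoints(lidar_cam, px_distorted, zero, zero, K, D);
  // (b) Undistort the image first, then project with zero distortion.
  cv::projectPoints(lidar_cam, zero, zero, K, cv::noArray(), px_undistorted);
}
```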
If you can undistort your image, I think the calibration result would look a lot better. We need to solve PnP from the chessboard in the 3D world frame to the chessboard in the image plane. If the image plane is distorted, the transforms estimated by PnP will not be accurate, and this error carries through to the other aspects of the calibration. Distortion is smallest in the middle of the frame, which is probably why your calibration is best there. Even if your chessboard is in the exact middle of the frame, its edges may still be distorted, which may be why even your calibration at the middle is not so good.
Hello @darrenjkt
Great work on the tool; it's very easy to interface with.
If we publish an undistorted image from the camera driver, what should the values in the camera_info topic be in that case?
Also, my camera isn't fisheye; it's a pinhole with 3 radial (k) distortion parameters and 2 tangential parameters. In the code snippet you attached, you only undistort the image for the fisheye camera model, not the pinhole model. So should I publish an undistorted image from the camera driver and remove all distortion parameters from here?
```cpp
} else {  // Pinhole model
  cv::solvePnP(corners_3d, corners, i_params.cameramat, i_params.distcoeff, rvec, tvec);
  cv::projectPoints(corners_3d, rvec, tvec, i_params.cameramat, i_params.distcoeff, inner_cbcorner_pixels);
  cv::projectPoints(board_corners_3d, rvec, tvec, i_params.cameramat, i_params.distcoeff, board_image_pixels);
}
```
Also, you use the distortion parameters in the optimizer to calculate the projection errors; should they be removed from there as well?
I'd appreciate your feedback.
This codebase has really only been tested with a fisheye camera; I have not tested it on a pinhole camera despite there being code for it. You could try publishing the undistorted image from the camera driver. In my opinion that's the easiest way to do it, and hopefully it turns out well.
If you publish a rectified image, the distortion coefficients in camera_info should also be all zeros. If the D matrix is all zeros, then you can leave that line of code as is. Otherwise, simply feed in a zero matrix in place of distcoeff.
Yes, you have to make sure the optimizer's projection errors are also computed with a zero matrix for the distortion coefficients.
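Concretely, something like this, reusing the variable names from the snippet above (the 5-element size assumes the plumb_bob model):

```cpp
// Sketch: feed zeros in place of the distortion coefficients everywhere they
// are used (PnP, projection, and the optimizer's reprojection error).
cv::Mat zero_dist = cv::Mat::zeros(1, 5, CV_64F);  // plumb_bob assumed
cv::solvePnP(corners_3d, corners, i_params.cameramat, zero_dist, rvec, tvec);
cv::projectPoints(corners_3d, rvec, tvec, i_params.cameramat, zero_dist, inner_cbcorner_pixels);
```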
Thanks @darrenjkt for your feedback.
I've tried feeding a zeros matrix in for the distortion coefficients in this code snippet and in the optimizer using the pinhole camera model, but the VOQ was very high during the optimization, ranging from 100 to 400. Where could the distortion coeffs affect the optimization this severely?
Also, as you can see from the images I attached in the original post, my image is more distorted than the sample images in the readme. So I tried setting the model to fisheye with the same intrinsics and distortion coeffs from the pinhole camera, but the optimization resulted in a very bad calibration, worse than the attached images. Could you guide me on how to use the package with the fisheye camera model?
I also wonder what you used for intrinsic estimation: ROS camera calibration or MATLAB?
One last thing: is it possible to feed an initial estimate to the optimizer?
Thanks again.
Sorry, it took me a while to get back; I must've missed this response and been a bit busy.
There is already an initial estimate for the rotation fed to the optimizer, computed analytically with matrix equations. Theoretically this should be all you need, but because of sensor errors and other physical factors, the maths alone doesn't give a great result, hence we optimise with other metrics, using this as a starting point. https://github.com/acfr/cam_lidar_calibration/blob/e01d79fdcaebf31d634fc479f0e7e4775d87d502/src/optimiser.cpp#L356-L358 https://github.com/acfr/cam_lidar_calibration/blob/e01d79fdcaebf31d634fc479f0e7e4775d87d502/src/optimiser.cpp#L375
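For intuition, one standard analytical initialisation of this kind is Kabsch/SVD alignment of matched unit vectors (e.g. board normals seen in each frame). This is a hedged sketch of that general technique, not necessarily the exact equations in optimiser.cpp:

```cpp
// Hedged sketch: Kabsch/SVD rotation from matched unit vectors (may differ
// from the exact formulation in optimiser.cpp).
#include <opencv2/core.hpp>

cv::Mat estimateRotation(const cv::Mat& lidar_dirs,   // N x 3 (CV_64F), unit vectors
                         const cv::Mat& camera_dirs)  // N x 3, matched row-for-row
{
  cv::Mat H = lidar_dirs.t() * camera_dirs;  // 3x3 cross-covariance
  cv::Mat w, u, vt;
  cv::SVD::compute(H, w, u, vt);
  cv::Mat R = vt.t() * u.t();
  if (cv::determinant(R) < 0) {  // correct a reflection to a proper rotation
    cv::Mat d = cv::Mat::eye(3, 3, CV_64F);
    d.at<double>(2, 2) = -1.0;
    R = vt.t() * d * u.t();
  }
  return R;  // maps lidar-frame directions into the camera frame
}
```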
The initial estimate for the translation is also fed to the optimizer, using the centres of the boards in the lidar and camera frames. See below. https://github.com/acfr/cam_lidar_calibration/blob/e01d79fdcaebf31d634fc479f0e7e4775d87d502/src/optimiser.cpp#L423-L430
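The idea is roughly: with the rotation fixed, average t = c_camera − R · c_lidar over the matched board centres. A sketch under that assumption (the names are mine):

```cpp
// Hedged sketch: translation initialised from matched board centres,
// t = mean(c_camera - R * c_lidar) over all samples.
#include <opencv2/core.hpp>
#include <vector>

cv::Vec3d estimateTranslation(const std::vector<cv::Vec3d>& lidar_centres,
                              const std::vector<cv::Vec3d>& camera_centres,
                              const cv::Matx33d& R)
{
  cv::Vec3d t(0, 0, 0);
  for (size_t i = 0; i < lidar_centres.size(); ++i)
    t += camera_centres[i] - R * lidar_centres[i];
  return t * (1.0 / static_cast<double>(lidar_centres.size()));
}
```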
I didn't do the intrinsic calibration for these cameras, but I'm aware that OpenCV has a calibration function.
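If it helps, a minimal sketch of intrinsic calibration with cv::calibrateCamera, where objpoints/imgpoints are per-view chessboard correspondences (the function and variable names are illustrative):

```cpp
// Hedged sketch: OpenCV intrinsic calibration from chessboard detections.
#include <opencv2/calib3d.hpp>
#include <vector>

double calibrateIntrinsics(const std::vector<std::vector<cv::Point3f>>& objpoints,
                           const std::vector<std::vector<cv::Point2f>>& imgpoints,
                           const cv::Size& image_size, cv::Mat& K, cv::Mat& D)
{
  std::vector<cv::Mat> rvecs, tvecs;
  // Returns the RMS reprojection error; fills K (3x3) and D (distortion).
  return cv::calibrateCamera(objpoints, imgpoints, image_size, K, D, rvecs, tvecs);
}
```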
Hello,
I have been trying to calibrate a VLP32 lidar with a camera using different tools, and I came across your paper and have gotten the best results with it so far. However, there is one issue that has been noticeable with every calibration technique I've tried.
The calibration is at its best in the middle of the image; however, the closer it gets to a corner, the more the shift increases, until it is very significant at the corners.
I have already taken a lot of samples at the corners and gone through all the recommended steps, but I am not sure what I might be missing. I have attached two images with the best results I have achieved so far from the tool. Note that I commented out the distortion part of the code for the point cloud, so I am calibrating distorted images against the distorted point cloud. There is a significant shift, as shown, in the Z direction.
https://ibb.co/K00Kb6N https://ibb.co/Y8XD170
Another question: what is the procedure regarding distortion? Is it undistorted camera + undistorted lidar, or distorted + distorted? At which point is the distortion taken into consideration? I have tried undistorted camera + undistorted lidar, but the results I got had a relatively very large error.
Thanks in advance.