SokratG / Surround-View

MIT License

SVAutoCalib: Four cameras were calibrated separately without joint calibration #5

Closed. fanDongHust closed this issue 2 years ago.

fanDongHust commented 2 years ago

Four cameras were calibrated separately without joint calibration, so getExtTranslation() is not useful.

Why not consider putting the four cameras in the same world coordinate system, jointly calibrating the extrinsic parameters, and then stitching with the calibrated extrinsics?

SokratG commented 2 years ago

About translation: in the current case it is really not useful, because the stitching is treated as a panorama (rotation only, about a single optical centre). The autocalibration method OpenCV provides for stitching does not take translation into account. But it can be taken into account, e.g. if somebody writes their own extrinsic-parameter calibration (bundle adjustment) using an optimization solver; that is why I keep the translation parameter. The classes in the modules have an API, so you can implement your own algorithms for different tasks and plug them into the project, possibly getting better results.

About joint calibration: inside OpenCV, bundle adjustment (BA) performs joint parameter refinement (aka calibration). SVAutoCalib follows these steps: 1) calculate the homographies between each pair of cameras and extract the intrinsic and extrinsic parameters (all of this happens inside OpenCV; for details see its documentation and source code); 2) refine the parameters with BA, so that all cameras are calibrated jointly by minimizing the reprojection error.

So they are jointly calibrated, if I understood correctly what you meant.
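
For reference, a minimal sketch of that pipeline using OpenCV's `cv::detail` API (the same machinery the stitching module uses; the detector choice and thresholds here are illustrative, not the project's exact settings):

```cpp
#include <opencv2/features2d.hpp>
#include <opencv2/stitching/detail/matchers.hpp>
#include <opencv2/stitching/detail/motion_estimators.hpp>
#include <vector>

// Pairwise homographies give initial intrinsics/rotations (step 1),
// then bundle adjustment refines all cameras jointly (step 2).
std::vector<cv::detail::CameraParams> estimateCameras(const std::vector<cv::Mat>& images)
{
    using namespace cv::detail;

    // Detect features in every image and match all image pairs.
    std::vector<ImageFeatures> features(images.size());
    cv::Ptr<cv::Feature2D> finder = cv::ORB::create(2500);
    for (size_t i = 0; i < images.size(); ++i)
        computeImageFeatures(finder, images[i], features[i]);

    std::vector<MatchesInfo> pairwise_matches;
    BestOf2NearestMatcher matcher(false, 0.3f);
    matcher(features, pairwise_matches);

    // Step 1: initial camera parameters from pairwise homographies.
    std::vector<CameraParams> cameras;
    HomographyBasedEstimator estimator;
    estimator(features, pairwise_matches, cameras);

    // Step 2: joint refinement of all cameras by minimizing reprojection error.
    for (auto& cam : cameras)
        cam.R.convertTo(cam.R, CV_32F);  // the adjuster expects float rotations
    cv::Ptr<BundleAdjusterBase> adjuster = cv::makePtr<BundleAdjusterRay>();
    adjuster->setConfThresh(1.0);
    (*adjuster)(features, pairwise_matches, cameras);
    return cameras;
}
```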

fanDongHust commented 2 years ago

Thanks for your reply. I noticed that your four cameras are set up very close together; they are not installed on the front and rear of a real vehicle.

The AVM system consists of four cameras located at the centers of the front and rear bumpers and under the two side-view mirrors, as shown in Figure 1 with red triangles: [image]

Is SVAutoCalib applicable in this case?

SokratG commented 2 years ago


The essence of how this software computes the extrinsic parameters (i.e., the relationship between cameras: rotation and translation) is finding common points in the camera images, which are then used to solve the optimization problem. This depends on how much the image area overlaps between adjacent cameras, so that feature points can be found (manually or with a detection algorithm like ORB/SIFT). The overlap in turn depends on how close or far apart the cameras are placed, but the computation itself is affected only by the overlapping area. And of course the fisheye-lens distortion affects these common points when you minimize the reprojection error, so you must first remove this effect by calibrating each camera individually. So SVAutoCalib can calibrate cameras in this case, but note that with 4 cameras it expects 5 images (for details see https://github.com/SokratG/Surround-View/issues/2#issuecomment-1073278617). A rough sketch of such pairwise overlap matching is shown below.
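
As an illustration of that dependence on the overlap, a minimal sketch (the detector settings and the 0.75 ratio are illustrative) that counts how many ORB features actually match between two adjacent, already-undistorted camera images:

```cpp
#include <opencv2/features2d.hpp>
#include <vector>

// Detect ORB features in both views and count ratio-test survivors.
// A count near zero means the overlap has too little usable texture.
int countOverlapMatches(const cv::Mat& left_img, const cv::Mat& front_img)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2500);
    std::vector<cv::KeyPoint> kpts1, kpts2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(left_img, cv::noArray(), kpts1, desc1);
    orb->detectAndCompute(front_img, cv::noArray(), kpts2, desc2);

    // Lowe ratio test on 2-NN Hamming matches.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);
    int good = 0;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
            ++good;
    return good;
}
```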

fanDongHust commented 2 years ago

Thank you for your patient and detailed explanation!

I cannot get calibrated extrinsics from SVAutoCalib: computeImageFeaturesAndMatches_ always returns false. After (*matcher)(features, pairwise_matches), features[i].size() is non-zero for every image (features[0].size()=1331, features[1].size()=266, features[2].size()=656, features[3].size()=2500), but every pairwise_matches[i] contains 0 matches.
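
For reference, a small sketch of how the contents of those `cv::detail::MatchesInfo` entries can be inspected (dumpPairwiseMatches is a hypothetical helper name, not part of the project):

```cpp
#include <opencv2/stitching/detail/matchers.hpp>
#include <iostream>
#include <vector>

// Each MatchesInfo holds the DMatch list, inlier count, and confidence
// for one ordered image pair produced by the matcher.
void dumpPairwiseMatches(const std::vector<cv::detail::MatchesInfo>& pm)
{
    for (const auto& info : pm) {
        if (info.src_img_idx < 0 || info.dst_img_idx < 0)
            continue;  // skip placeholder entries for unmatched pairs
        std::cout << info.src_img_idx << " -> " << info.dst_img_idx
                  << ": matches=" << info.matches.size()
                  << " inliers=" << info.num_inliers
                  << " confidence=" << info.confidence << "\n";
    }
}
```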

My four fisheye images are: [images: 000596F (front), 000596L (left), 000596B (back), 000596R (right)]


If I take a picture of a checkerboard, I get the same result.

My fisheye undistortion code is shown further below. First, the intrinsic and distortion parameters of my fisheye cameras:

```cpp
void initializeK(Eigen::Matrix3f& K_F, Eigen::Matrix3f& K_L,
                 Eigen::Matrix3f& K_B, Eigen::Matrix3f& K_R) {
    K_F << 422.13163849,   0.00000000, 612.82890504,
             0.00000000, 421.10340889, 545.05656249,
             0.00000000,   0.00000000,   1.00000000;

    K_L << 420.60079305,   0.00000000, 650.54173853,
             0.00000000, 418.94827188, 527.27178143,
             0.00000000,   0.00000000,   1.00000000;

    K_B << 422.61569350,   0.00000000, 632.46019501,
             0.00000000, 421.27373079, 548.34673288,
             0.00000000,   0.00000000,   1.00000000;

    K_R << 421.64203585,   0.00000000, 640.09362064,
             0.00000000, 420.26647020, 529.05566315,
             0.00000000,   0.00000000,   1.00000000;
}

void initializeD(Eigen::Vector4f& D_F, Eigen::Vector4f& D_L,
                 Eigen::Vector4f& D_B, Eigen::Vector4f& D_R) {
    D_F << -0.07031853,      0.00387505,     -0.00333139,     0.00056406;
    D_L << -6.58382798e-02, -2.00728513e-03, -3.72535694e-04, 1.81851668e-06;
    D_B << -0.06553861,     -0.00094857,     -0.00150748,     0.000325;
    D_R << -0.07289752,      0.01254629,     -0.01300477,     0.00361266;
}
```

Do I have a problem with the distortion process?

The undistortion code itself:

```cpp
void undistortFisheye(cv::Mat image_in, cv::Mat camera_intrinsic_matrix,
                      cv::Vec4d distort_coeff_matrix, cv::Mat& image_out) {
    cv::Size img_size = image_in.size();
    // The camera intrinsics and distortion coefficients are known.
    // We should know whether camera_intrinsic_matrix needs to be updated by
    // cv::fisheye::estimateNewCameraMatrixForUndistortRectify or not.

    // Step 1: estimate the new camera matrix for the undistorted image.
    // If camera_intrinsic_matrix already comes from
    // estimateNewCameraMatrixForUndistortRectify, it can be used directly as
    // the 5th parameter; if it is the original matrix, the 5th parameter
    // should be an uninitialized cv::Mat newCamMat.
    cv::Mat newCamMat;
    cv::fisheye::estimateNewCameraMatrixForUndistortRectify(
        camera_intrinsic_matrix, distort_coeff_matrix, img_size,
        cv::Mat::eye(3, 3, CV_32FC1), newCamMat, 1);

    // Alternatives tried:
    // cv::Mat newCamMat = camera_intrinsic_matrix;
    // newCamMat.at<float>(0, 2) = image_in.cols / 2;
    // newCamMat.at<float>(1, 2) = image_in.rows / 2;
    // cv::Mat newCamMat = cv::getOptimalNewCameraMatrix(
    //     camera_intrinsic_matrix, distort_coeff_matrix, img_size, 1, img_size, 0);

    // Method 1
    // Step 2: compute the undistortion maps map1, map2.
    cv::Mat map1, map2;
    cv::fisheye::initUndistortRectifyMap(
        camera_intrinsic_matrix, distort_coeff_matrix,
        cv::Mat::eye(3, 3, CV_32FC1), newCamMat, img_size,
        CV_32FC1, map1, map2);

    // Step 3: remap the image to remove the distortion.
    cv::Mat correct_image;
    cv::remap(image_in, correct_image, map1, map2, cv::INTER_LINEAR);

    // Method 2
    // Step 4: undistort directly with cv::fisheye::undistortImage.
    cv::Mat undistort_im;
    cv::fisheye::undistortImage(
        image_in, undistort_im,
        camera_intrinsic_matrix,
        distort_coeff_matrix,
        camera_intrinsic_matrix,  // newCamMat could be passed here instead
        image_in.size());

    cv::imshow("newCamMat", correct_image);
    cv::imshow("camera_intrinsic_matrix", undistort_im);
    image_out = undistort_im;  // or correct_image
    cv::waitKey(0);
}
```
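
For completeness, a hypothetical driver showing one way these helpers could be wired together (`cv::eigen2cv` comes from `<opencv2/core/eigen.hpp>`; the image file name is only an example):

```cpp
#include <opencv2/core/eigen.hpp>   // cv::eigen2cv
#include <opencv2/imgcodecs.hpp>
#include <Eigen/Dense>

int main() {
    Eigen::Matrix3f K_F, K_L, K_B, K_R;
    Eigen::Vector4f D_F, D_L, D_B, D_R;
    initializeK(K_F, K_L, K_B, K_R);
    initializeD(D_F, D_L, D_B, D_R);

    // Convert the Eigen intrinsics to the cv types undistortFisheye expects.
    cv::Mat K_F_cv;
    cv::eigen2cv(K_F, K_F_cv);
    cv::Vec4d D_F_cv(D_F(0), D_F(1), D_F(2), D_F(3));

    cv::Mat front = cv::imread("000596F.png");  // example file name
    cv::Mat front_undistorted;
    undistortFisheye(front, K_F_cv, D_F_cv, front_undistorted);
    return 0;
}
```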

The defish results are as follows:

[images: defish_F, defish_L, defish_B, defish_R]


Thank you again.

SokratG commented 2 years ago


Hmm... You're cropping away large areas of the images when undistorting. But the problem is in feature detection, I guess. The keypoints in your overlapping regions have bad descriptors because the texture there is mostly monotone. To get good feature descriptors I place an ArUco board in every overlapping region of the cameras (see figure below); that helps the extrinsic calibration. [image: CamFeature]

Also, in the code I use the ORB feature detector, and I spent a long time tuning its parameters. You can use another feature detector, e.g. BRISK or SIFT, for the first calibration pass; that may help. A sketch of the ArUco idea is shown below.
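
A sketch of the ArUco-board idea, assuming the opencv_contrib aruco module (pre-4.7 API; the dictionary choice is illustrative): detect markers in two adjacent views and keep the IDs visible in both, so the matched points are exact board corners rather than weak natural texture:

```cpp
#include <opencv2/aruco.hpp>
#include <algorithm>
#include <vector>

// Return the marker IDs detected in both views; the corresponding corner
// sets (corners_a/corners_b) then give reliable point correspondences.
std::vector<int> commonMarkerIds(const cv::Mat& view_a, const cv::Mat& view_b)
{
    cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
    std::vector<std::vector<cv::Point2f>> corners_a, corners_b;
    std::vector<int> ids_a, ids_b, common;
    cv::aruco::detectMarkers(view_a, dict, corners_a, ids_a);
    cv::aruco::detectMarkers(view_b, dict, corners_b, ids_b);
    for (int id : ids_a)
        if (std::find(ids_b.begin(), ids_b.end(), id) != ids_b.end())
            common.push_back(id);
    return common;
}
```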

fanDongHust commented 2 years ago

The adjacent cameras you used can capture a common area because they are installed in close proximity. If the four cameras were attached to a real vehicle, adjacent cameras would not be able to capture the same area. What is the FOV of the camera you are using?

SokratG commented 2 years ago


[images: L, F, R camera views]

So, in your examples I do see overlapping regions between adjacent cameras. But as I said, the problem is the high distortion of the fisheye lens: when you unwarp and crop, you lose a lot of image information. To get good calibration parameters you should use a calibration pattern in the overlapping region. See, for example, setups where a calibration pattern is placed in the overlapping region of adjacent cameras and its corner points are marked to compute the camera parameters. In my code, by contrast, the points are detected automatically by a feature detector (ORB). So you should research the area of multi-camera calibration with fisheye lenses further (specifically, how to unwarp and detect common points with little or no image cropping).

The FOV of my lens is 120°. That is a small field of view, and the room where I calibrated the cameras is also small, which is why I placed the cameras so close together. But outdoors I could place them as on a real car and calibrate without problems.

fanDongHust commented 2 years ago

Thank you very much.

As you said: "outdoors I could place them as on a real car and calibrate without problems."

I want to know: when outdoors, where are the cameras installed on the car? Are they farther apart?

Someone else (https://github.com/fanDongHust/ROECS_DF/blob/main/shao2019.pdf) corrects the extrinsic parameters: the relative motion between the front and left cameras, TLF, can be estimated by an epipolar constraint using a chessboard placed in the common field of view.

But I prefer to use your method. However, I have problems with it, especially how to undistort the fisheye images well.

After the fisheye image is undistorted, the overlapping area of adjacent cameras is stretched so much that I cut it out. The camera at the front of the car looks forward and down, while the camera on the left looks left and down, so the two cameras are far apart and differ greatly in orientation. Can an object in the overlap area be seen usefully by two cameras with such different orientations? Can feature points in such different views be matched?

SokratG commented 2 years ago


I have not used a fisheye camera, so I cannot help with how this can be resolved; you should research this area on your own. The paper you showed has references to calibration approaches based on feature extraction; see those resources. I also recommend looking at the code in Intel/libxcam - that library works well with fisheye lenses (you only need to understand how they unwarp the camera image).

fanDongHust commented 2 years ago

> I also recommend looking at the code in Intel/libxcam - that library works well with fisheye lenses (you only need to understand how they unwarp the camera image).

Thank you for your guidance.

Francis235 commented 2 years ago

> Four cameras were calibrated separately without joint calibration, so getExtTranslation() is not useful.

Hello, I have read through the problems you ran into, and I am now running into similar ones. Did you complete the real-vehicle 3D surround view based on this open-source project? How are the results? thx

CPFelix commented 1 year ago

@SokratG @fanDongHust My fisheye cameras have a 180° FOV, and panoramic stitching with the corrected pictures is particularly poor. In addition, following what you mentioned, I placed a calibration board in the field of view of adjacent cameras to collect feature points, but the effect was not great. I only tested stitching the front and right cameras. Front camera undistortion: [image: front_undistort]. Right camera undistortion: [image: right_undistort]. Final result: [image: result]. Is my undistortion not good enough, resulting in poor feature-point matching? Or are such ultra-wide-FOV fisheye cameras unsuitable for panoramic stitching?