ankitdhall / lidar_camera_calibration

ROS package to find a rigid-body transformation between a LiDAR and a camera for "LiDAR-Camera Calibration using 3D-3D Point correspondences"
http://arxiv.org/abs/1705.09785
GNU General Public License v3.0

Do you analyse scale also? #19

Closed · Tamme closed 6 years ago

Tamme commented 6 years ago

Hi.

When using a VLP-16 and a Logitech C930 positioned close to each other, objects appear with different sizes in the two sensors. Also, when calculating the scales from the transformation matrix or transforming RGB pixels on top of the point cloud, the scale difference is visible, as seen in the figure with the two ArUco cardboards.

[image: reprojection result showing the scale mismatch on the two ArUco boards]

Is it correct that scale information is not considered, or is the problem likely somewhere else?
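For reference, this is roughly how I checked for a scale component in the transform (a numpy sketch; the file name is just a placeholder for wherever the find_transform.launch result is stored):

import numpy as np

# Hypothetical file holding the 4x4 transform printed by find_transform.launch
T = np.loadtxt('transform.txt').reshape(4, 4)

# A pure rigid-body transform has rotation-block column norms of exactly 1;
# any deviation would indicate a scale component.
scales = np.linalg.norm(T[:3, :3], axis=0)
print(scales)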

karnikram commented 6 years ago

Hi,

The scale information is not being used anywhere. The method only uses 3D point correspondences, which are the 8 corners of the two marker boards.
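(For context, the fit over those eight corner pairs is a closed-form rigid-body alignment, so by construction it solves only for a rotation and a translation. A minimal Kabsch-style sketch of that idea, not the repo's exact code:)

import numpy as np

def rigid_fit(lidar_pts, camera_pts):
    # Closed-form (Kabsch) fit of R, t minimizing ||R @ lidar + t - camera||;
    # note there is no scale term anywhere in the model.
    mu_l = lidar_pts.mean(axis=0)
    mu_c = camera_pts.mean(axis=0)
    H = (lidar_pts - mu_l).T @ (camera_pts - mu_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_c - R @ mu_l
    return R, t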

There's probably something else going wrong. Can I know how you're reprojecting color onto the point cloud?

Tamme commented 6 years ago

Hi. Yes, the problem could be elsewhere too. For reprojecting, I'm transforming the point-cloud points with the transformation matrix from the find_transform.launch result and then taking the dot product with the camera's projection matrix. This gives the 2D coordinates of the 3D points. Then I take the corresponding color from the 2D camera frame and use it for the 3D point.

As a reference, I used this: https://github.com/swyphcosmo/ros-camera-lidar-calibration/blob/master/scripts/lidar_image_calibration/lidar_rgb.py
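In numpy terms, the projection step I described is roughly this (a sketch, assuming T is the 4x4 result from find_transform.launch and P is the 3x4 projection matrix from the camera's camera_info topic):

import numpy as np

def project_points(points_xyz, T, P):
    # points_xyz: Nx3 LiDAR points, T: 4x4 LiDAR-to-camera transform,
    # P: 3x4 projection matrix (e.g. from the camera_info topic)
    homog = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # Nx4
    uvw = P @ (T @ homog.T)          # 3xN homogeneous pixel coordinates
    uv = (uvw[:2] / uvw[2]).T        # Nx2 after the perspective divide
    return uv                        # look up the RGB values at these pixels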

karnikram commented 6 years ago

Are you sure about the quality of your intrinsics?

It seems like the output translation estimate along the z axis (forward of camera) is incorrect.

Anyway, the following is a simpler MATLAB script that I use for debugging. It plots the point correspondences directly in 3D after transforming them into a common frame, to help visualize the registration and any error. See if it helps you understand your situation better.


% Plot the 3D point correspondences from points.txt after
% transforming the LiDAR points into the camera frame.
clear all;
close all;

% points.txt: first line is the point count, then 8 LiDAR corner
% points followed by 8 camera corner points (one "x y z" per line)
fileLoc = '/home/karnik/iiit_ws/src/lidar_camera_calibration/conf/points.txt';
file = fopen(fileLoc, 'r');
numPoints = str2double(fgetl(file));

% Read the 8 LiDAR points (lines 2-9)
tline = fgetl(file);
index = 2;
lidarPoints = [];
while(ischar(tline) && index < 10)
    linesplits = strsplit(tline);
    lidarPoints = [lidarPoints; str2double(linesplits)];
    index = index + 1;
    tline = fgetl(file);
end

% Read the 8 camera points (lines 10-17)
cameraPoints = [];
while(ischar(tline) && index < 18)
    linesplits = strsplit(tline);
    cameraPoints = [cameraPoints; str2double(linesplits)];
    index = index + 1;
    tline = fgetl(file);
end

fclose(file);

% Estimated LiDAR-to-camera transformation
T = [0.997875  -0.0192326  -0.062254   -0.24824;
     0.0148479  0.997426   -0.0701444  -0.396634;
     0.0634428  0.069071    0.995592   -0.650464;
     0          0           0           1];

figure
xlabel('X');
ylabel('Y');
zlabel('Z');

% Transform the LiDAR points into the camera frame (homogeneous coordinates)
homolidarPoints = lidarPoints';
homolidarPoints = [homolidarPoints; ones(1, size(lidarPoints, 1))];
transformedhomolidarPoints = T * homolidarPoints;

% Plot both point sets in the camera frame
hold on;
scatter3(cameraPoints(:,1), cameraPoints(:,2), cameraPoints(:,3), 'filled', 'r');
scatter3(transformedhomolidarPoints(1,:), transformedhomolidarPoints(2,:), transformedhomolidarPoints(3,:), 'filled', 'b');
legend('camera points', 'lidar points');

% RMSE over the 8 correspondences after alignment
thlidarPointsTranspose = transformedhomolidarPoints(1:3,:)';
postError = (cameraPoints(:) - thlidarPointsTranspose(:)).^2;
postRmse = sqrt(sum(postError(:)) / 8);
disp(postRmse);
Tamme commented 6 years ago

I don't have MATLAB, so I translated it into Python. Does the original points.txt in the repo correspond to the T in your script, so that the point clouds should align? I could use this to verify my code is correct.
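My port looks roughly like this (numpy/matplotlib, assuming the same points.txt layout: a count line, then 8 LiDAR rows followed by 8 camera rows):

import numpy as np
import matplotlib.pyplot as plt

# Read the count line, then 8 LiDAR points and 8 camera points
with open('points.txt') as f:
    lines = f.read().splitlines()
pts = np.array([[float(v) for v in line.split()] for line in lines[1:17]])
lidar_pts, camera_pts = pts[:8], pts[8:]

# Estimated transformation (values from the MATLAB script above)
T = np.array([[0.997875,  -0.0192326, -0.062254,  -0.24824],
              [0.0148479,  0.997426,  -0.0701444, -0.396634],
              [0.0634428,  0.069071,   0.995592,  -0.650464],
              [0,          0,          0,           1]])

# Transform the LiDAR points into the camera frame
homog = np.vstack([lidar_pts.T, np.ones(8)])
transformed = (T @ homog)[:3].T

# Plot both point sets and report the RMSE over the 8 correspondences
ax = plt.figure().add_subplot(projection='3d')
ax.scatter(*camera_pts.T, c='r', label='camera points')
ax.scatter(*transformed.T, c='b', label='lidar points')
ax.legend()
print(np.sqrt(((camera_pts - transformed) ** 2).sum() / 8))
plt.show()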

As for the intrinsics, they varied when I ran the calibration several times, but they always looked plausible. Yeah, the z axis was the one that looked the most out of place.

Here is the result of the calibration, with the camera positioned almost exactly below the LiDAR:

After 100 iterations:


Average translation is:
-0.0529785 -0.132302 0.415406

Average rotation is:
 0.994788   -0.00719048  0.101711
-0.00447082  0.993475    0.113961
-0.101867   -0.113821    0.988265

Average transformation is:
 0.994788   -0.00719048  0.101711  -0.0529785
-0.00447082  0.993475    0.113961  -0.132302
-0.101867   -0.113821    0.988265   0.415406
 0           0           0          1

Final rotation is:
 0.102503   -0.994712    0.00639842
 0.113957    0.00535264 -0.993471
 0.988184    0.102563    0.113903

Final ypr is:
0.838262 -1.41691 0.733061

Average RMSE is: 0.207889
RMSE on average transformation is: 0.209338

karnikram commented 6 years ago

The coordinates inside points.txt are estimates of the marker board's corner points with respect to the individual frames, not after aligning them. Read the wiki! The script I gave you aligns them with the output T and plots them. It's just an alternative way of checking your result.

Okay, now clearly the Z estimate is incorrect. Check if the coordinates reported inside points.txt are correct. They should all be quite close except along the Y axis.
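A quick sanity check along those lines (a numpy sketch, same points.txt layout as before):

import numpy as np

# Skip the count line, then compare the two raw point sets per axis;
# X and Z differences should be small, Y reflects the actual sensor offset.
pts = np.loadtxt('points.txt', skiprows=1)
lidar_pts, camera_pts = pts[:8], pts[8:16]
print((lidar_pts - camera_pts).mean(axis=0))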

Irenedvs commented 6 years ago

@Tamme did you figure out a solution to this problem? I have the same issue, where the scales / correspondences between the LiDAR and camera don't match.

Tamme commented 6 years ago

Hi, sorry, I didn't directly solve it. I think I hand-tuned the coefficients afterwards and got them to be roughly correct, but I don't remember what the direct solution to the scaling problem was.

SiddhantNadkarni commented 4 years ago

@Tamme which coefficients did you hand-tune? The camera intrinsics?

SiddhantNadkarni commented 4 years ago

Same issue, has anyone managed to resolve this?