lardemua / atom

Calibration tools for multi-sensor, multi-modal robotic systems
GNU General Public License v3.0

Refine ZED-Velodyne calibration #179

Closed — aaguiar96 closed this 4 years ago

aaguiar96 commented 4 years ago

Now that we have the objective function working, we should:

eupedrosa commented 4 years ago

Following #149, @aaguiar96, that rotation the ZED camera has may be caused by that -83 in the P matrix. Try to redo the calibration now, but using the K matrix, to see if anything changes.

aaguiar96 commented 4 years ago

Following #149, @aaguiar96, that rotation the ZED camera has may be caused by that -83 in the P matrix. Try to redo the calibration now, but using the K matrix, to see if anything changes.

Ok, but still, the -83 is not correct, right? I'll go to the lab on Monday and try to recalibrate the camera and record a new dataset.

eupedrosa commented 4 years ago

You do not need a new dataset, you can use the one that you have right now. The K matrix does not have that -83.

miguelriemoliveira commented 4 years ago

Hi @aaguiar96, when you say "Record a new dataset" you mean record a new bag file, right?

@eupedrosa , if @aaguiar96 is going to the lab he could indeed record a new bag file because the one we have right now has several problems (#157 )

eupedrosa commented 4 years ago

I know!! I was just curious to see if we already have a working solution...

aaguiar96 commented 4 years ago

I know!! I was just curious to see if we already have a working solution...

I don't think it's working...

rviz_screenshot_2020_06_13-12_54_37 rviz_screenshot_2020_06_13-12_55_36

aaguiar96 commented 4 years ago

Hi @aaguiar96, when you say "Record a new dataset" you mean record a new bag file, right?

@eupedrosa , if @aaguiar96 is going to the lab he could indeed record a new bag file because the one we have right now has several problems (#157 )

Yes, I don't know if I'll have time to solve all the issues since I have to perform the calibration, but at least I can record a longer dataset, with the pattern closer to the camera.

eupedrosa commented 4 years ago

Without the camera I cannot judge :\ It is not visible. Can you add the tf tree to the visualization?

aaguiar96 commented 4 years ago

Without the camera I cannot judge :\ It is not visible. Can you add the tf tree to the visualization?

The camera is there.

rviz_screenshot_2020_06_13-12_55_36

eupedrosa commented 4 years ago

Just to be clear, does this calibration use the K matrix or the `P` matrix?

aaguiar96 commented 4 years ago

Just to be clear, does this calibration use the K matrix or the `P` matrix?

The K matrix...

aaguiar96 commented 4 years ago

Calibration result:

Left:

image_width: 1280
image_height: 720
camera_name: narrow_stereo/left
camera_matrix:
  rows: 3
  cols: 3
  data: [652.25783,   0.     , 670.14407,
           0.     , 652.16532, 365.16427,
           0.     ,   0.     ,   1.     ]
camera_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.196334, 0.034772, -0.002054, 0.003261, 0.000000]
rectification_matrix:
  rows: 3
  cols: 3
  data: [ 0.99760051,  0.00946769, -0.06858274,
         -0.00884404,  0.99991679,  0.00939135,
          0.06866594, -0.00876227,  0.99760123]
projection_matrix:
  rows: 3
  cols: 4
  data: [672.44807,   0.     , 822.27319,   0.     ,
           0.     , 672.44807, 366.49005,   0.     ,
           0.     ,   0.     ,   1.     ,   0.     ]

Right:

image_width: 1280
image_height: 720
camera_name: narrow_stereo/right
camera_matrix:
  rows: 3
  cols: 3
  data: [672.6178 ,   0.     , 650.11811,
           0.     , 672.20899, 380.37149,
           0.     ,   0.     ,   1.     ]
camera_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.233312, 0.078484, -0.003954, 0.002391, 0.000000]
rectification_matrix:
  rows: 3
  cols: 3
  data: [ 0.99575785,  0.00898342, -0.09157295,
         -0.00981562,  0.99991449, -0.00864153,
          0.09148749,  0.00950371,  0.99576087]
projection_matrix:
  rows: 3
  cols: 4
  data: [672.44807,   0.     , 822.27319, -78.89036,
           0.     , 672.44807, 366.49005,   0.     ,
           0.     ,   0.     ,   1.     ,   0.     ]

aaguiar96 commented 4 years ago

Btw, the -78 appears again. This might be the translation of the right camera in relation to the left one...

I doubt that the manufacturer's calibration is wrong...
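
For reference, in a ROS stereo CameraInfo the fourth column of the right camera's P matrix holds -fx * B, where B is the stereo baseline, so a large negative value there is expected rather than a calibration error. A quick sanity check in Python, using the numbers from the YAML above:

```python
# In a ROS stereo pair, P[0][3] of the right camera is -fx * B,
# with B the baseline in metres (see sensor_msgs/CameraInfo).
fx = 672.44807      # P[0][0] of the right projection_matrix above
tx = -78.89036      # P[0][3] of the right projection_matrix above

baseline = -tx / fx  # recover the baseline in metres
print(round(baseline, 4))  # 0.1173
```

That is about 117 mm, close to the ZED's nominal 120 mm baseline, which supports the idea that this term is the right-to-left camera translation, not a manufacturer error.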

eupedrosa commented 4 years ago

I also doubt it is wrong. But we should only care about the K matrix. However, for that to work, we need to have access to the unrectified images.

The image the ZED SDK returns, is it rectified or not? Or does it publish both?

miguelriemoliveira commented 4 years ago

Hi,

That was an advance. Great work!

How about a comparison between the K matrix now and the K matrix we had before?

Also, to know if the calibration was any good can you tell us the reprojection error (printed by the ros calibration) as well as the number of images?

Recording the non-rectified images is important...

On Mon, Jun 15, 2020, 12:26 Eurico F. Pedrosa notifications@github.com wrote:

I also doubt it is wrong. But we should only care about the K matrix. However, for that to work, we need to have access to the unrectified images.

The image the ZED SDK returns, is it rectified or not? Or does it publish both?


aaguiar96 commented 4 years ago

I also doubt it is wrong. But we should only care about the K matrix. However, for that to work, we need to have access to the unrectified images.

The image the ZED SDK returns, is it rectified or not? Or does it publish both?

It publishes both. Tomorrow I will record both. I guess the one we're using is rectified... Can that be a problem for the optimization?

aaguiar96 commented 4 years ago

Also, to know if the calibration was any good can you tell us the reprojection error (printed by the ros calibration) as well as the number of images?

The calibration used 52 images for each camera. I did not save the reprojection error, so I don't know it now...

miguelriemoliveira commented 4 years ago

It publishes both. Tomorrow I will record both. I guess the one we're using is rectified... Can that be a problem for the optimization?

Hi @aaguiar96, you should record only the raw (unrectified) image. It is a problem for the optimization because we would be undistorting an already undistorted image.
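
To make the problem concrete, here is a minimal Python sketch, using only the plumb_bob radial terms and the left camera's k1/k2 from the YAML above: if the observed image is already rectified, modelling distortion on top of it introduces a large systematic residual instead of correcting anything.

```python
import math

# plumb_bob radial distortion (k1, k2 terms only) applied to a
# normalized image point; k1/k2 are the left-camera values above.
def distort(x, y, k1=-0.196334, k2=0.034772):
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return scale * x, scale * y

# A point towards the image corner, in normalized coordinates.
x, y = 0.8, 0.5
xd, yd = distort(x, y)

# On a rectified image the observation is already at (x, y); predicting
# (xd, yd) instead biases the residual by this much:
err = math.hypot(xd - x, yd - y)
print(err > 0.1)  # True: a large, systematic error, not noise
```

At fx ≈ 652 that bias is on the order of 90 pixels near the corner, which would dominate the optimization.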

aaguiar96 commented 4 years ago

Hi @aaguiar96, you should record only the raw (unrectified) image. It is a problem for the optimization because we would be undistorting an already undistorted image.

Ok so... This could improve the result! Tomorrow I'll fix this! :)

aaguiar96 commented 4 years ago

Hi @eupedrosa

How did you overcome the blocking problem when saving a collection? It's happening to me, even with bag_rate = 0.5 ...

Here's my command:

roslaunch agrob_calibration collect_data.launch output_folder:=$ATOM_DATASETS bag_rate:=0.5 overwrite:=true

aaguiar96 commented 4 years ago

Btw, the rosbag looks really nice this time! :)

aaguiar96 commented 4 years ago

Another thing, did you adjust the initial_estimate?

The robot is really misaligned with the pattern points. See here (for a frame where I am in front of the camera): rviz_screenshot_2020_06_19-11_16_55

eupedrosa commented 4 years ago

How did you overcome the blocking problem when saving a collection?

It has something to do with the difference in timestamps. I'm trying to fix it and I'll push it.

Another thing, did you adjust the initial_estimate?

Yes I did.

aaguiar96 commented 4 years ago

It has something to do with the difference in timestamps. I'm trying to fix it and I'll push it.

Ok... But do you think they are too different? They only differ in the nsecs field...

eupedrosa commented 4 years ago

@aaguiar96, I solved it. You can pull the change. It was a typo in the key used in the config dictionary.

This is something fragile in our code. There is no hardening against this kind of thing.

miguelriemoliveira commented 4 years ago

Hi guys,

sorry, I have had no time to work on this. Today I think I will have some.

Eurico, can you describe what was wrong in more detail?

The code should output something or abort; getting stuck is the worst-case scenario...

miguelriemoliveira commented 4 years ago

Hi @aaguiar96 ,

Another thing, did you adjust the initial_estimate?

The robot is really misaligned with the pattern points. See here (for a frame where I am in front of the camera):

Perhaps the agrob_description could have a xacro with a better estimate. The problem, I think, is that the velodyne is rotated some 45 degrees. If the original xacro came without these 45 degrees, we could skip the set_initial_estimate step.

eupedrosa commented 4 years ago

There was a typo here:

https://github.com/lardemua/atom/blob/a2648562fa009b587c125c240f7d8be78172ab4a/atom_calibration/src/atom_calibration/data_collector_and_labeler.py#L190

see 1dce8e0a76230afcc74cc07b2ba647f9de77f7d5

It raised an exception, but for some reason that exception is caught and ignored. As a consequence, the lock in the labels is never released.

miguelriemoliveira commented 4 years ago

It raised an exception, but for some reason that exception is caught and ignored. As a consequence, the lock in the labels is never released.

Thanks for the analysis, @eupedrosa. My opinion is that the real problem is with the lock. Perhaps we can unlock when catching an error... I will try it.
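
A hedged sketch of the hardening idea (not the actual data_collector_and_labeler code; the config key below is made up): acquiring the lock with a with-statement, or releasing it in a finally block, guarantees it is released even when a bad dictionary key raises.

```python
import threading

lock = threading.Lock()

def save_collection(config):
    # Acquiring via "with" releases the lock even if the body raises,
    # so a typo'd dictionary key can no longer leave the lock held.
    with lock:
        return config["sensors"]  # KeyError here still releases the lock

try:
    save_collection({})  # wrong/missing key -> raises KeyError
except KeyError:
    pass

print(lock.locked())  # False: released despite the exception
```
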

eupedrosa commented 4 years ago

@miguelriemoliveira, check #186.

miguelriemoliveira commented 4 years ago

NICE!

<?xml version="1.0"?>
<!--

          █████╗ ████████╗ ██████╗ ███╗   ███╗
         ██╔══██╗╚══██╔══╝██╔═══██╗████╗ ████║
         ███████║   ██║   ██║   ██║██╔████╔██║
         ██╔══██║   ██║   ██║   ██║██║╚██╔╝██║
  __     ██║  ██║   ██║   ╚██████╔╝██║ ╚═╝ ██║    _
 / _|    ╚═╝  ╚═╝   ╚═╝    ╚═════╝ ╚═╝     ╚═╝   | |
 | |_ _ __ __ _ _ __ ___   _____      _____  _ __| | __
 |  _| '__/ _` | '_ ` _ \ / _ \ \ /\ / / _ \| '__| |/ /
 | | | | | (_| | | | | | |  __/\ V  V / (_) | |  |   <
 |_| |_|  \__,_|_| |_| |_|\___| \_/\_/ \___/|_|  |_|\_\
 https://github.com/lardemua/atom
-->

<!-- WARNING WARNING WARNING WARNING auto-generated file!! -->
<!-- Only modify this file if you know what you are doing! -->

<!--
@file collect_data.launch Runs bringup collecting data from a bag file.

@arg output_folder Directory where the data will be stored.
@arg overwrite     If true, it will overwrite any existing output folder.
@arg marker_size   The size of the interaction marker that is used to trigger a data save.

@arg bag_file  Absolute path to the playing bag.
    default: /home/mike/bagfiles/calibration_5_dez_2019_03m-07m.bag
@arg bag_start Playback starting time (in seconds). default: 0.0
@arg bag_rate  Playback rate. default: 1.0
-->

@eupedrosa and @aaguiar96, how about also having a header like this in the config.yml? Actually, it is there already. Great, looks nice!

aaguiar96 commented 4 years ago

Perhaps the agrob_description could have a xacro with a better estimate. The problem, I think, is that the velodyne is rotated some 45 degrees. If the original xacro came without these 45 degrees, we could skip the set_initial_estimate step.

I'm having trouble setting the initial estimate... Do I have to rotate the sensors? How do I do that?...

eupedrosa commented 4 years ago

What @miguelriemoliveira is suggesting is to edit the original urdf and add the rotation there. Go to your xacro file and search for this:

<!-- velodyne-16 model -->
<xacro:vlp16_model name="vlp16" parent="tower_link">
      <origin xyz="0 0.6327 0" rpy="-1.57 0 0"/>
</xacro:vlp16_model>

Then change it to

<!-- velodyne-16 model -->
<xacro:vlp16_model name="vlp16" parent="tower_link">
      <origin xyz="0 0.6327 0" rpy="-1.57 -1.0 0"/>
</xacro:vlp16_model>

Then run rosrun agrob_calibration configure. With this change we do not need to run roslaunch agrob_calibration set_initial_estimate.launch.
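
One hedged aside: the rpy values in <origin> are radians, so a rotation of about 45 degrees corresponds to roughly -0.785 rather than -1.0 (which is about 57 degrees). Either may be close enough for an initial estimate, but it is worth computing rather than guessing:

```python
import math

# URDF/xacro <origin rpy="..."/> angles are expressed in radians.
print(round(math.radians(-45), 3))  # -0.785
```
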

aaguiar96 commented 4 years ago

Thanks @eupedrosa

I got one dataset with 9 collections. I did not record more because in a significant part of the bagfile my body appears as part of the labelled pattern... I will have to review the labelling procedure to avoid this.

Anyway, these are 9 safe collections, with different orientations and good labels. It is a good dataset to test.

Here: datasets.zip

aaguiar96 commented 4 years ago

I am having the same result as @eupedrosa

rviz_screenshot_2020_06_19-18_02_44

Also, I cannot see the cloud limit points... This was not happening in the previous version with the old dataset...

I will have to deeply debug what changed!

miguelriemoliveira commented 4 years ago

I will try to work on this this weekend. First I will work on #189 though.

aaguiar96 commented 4 years ago

Hi @eupedrosa

The problem causing this:

The laser beams do not correspond to the full pattern...

is in the data collection. See this: in the images on the left, the laser beams do not fill the entire pattern.

rviz_screenshot_2020_06_24-09_04_17

We have to take this into account when saving collections.

aaguiar96 commented 4 years ago

@miguelriemoliveira those images on the left are really helpful, thanks! :)

I added another simple level of filtering in the interactive_data_labeler to remove points belonging to my body from the labelled data points. Now I will record a dataset where the collections must meet the following criteria:

(Just an idea) : maybe in the future we can have an objective function that can deal with LiDAR partial detections on the pattern. :)

miguelriemoliveira commented 4 years ago

Yes. Good idea. For the future :)

About the new agrob dataset you made, it has some problems, right? I have this

image

Which is what you said above, right? So this dataset is not usable and you will send a new one?

aaguiar96 commented 4 years ago

Which is what you said above, right? So this dataset is not usable and you will send a new one?

Yes @miguelriemoliveira I'm working on it.

eupedrosa commented 4 years ago

So, the problem is in the labelling... right?

aaguiar96 commented 4 years ago

Hi @miguelriemoliveira and @eupedrosa

Can you help me understand this? I'm collecting data, and I stopped the bagfile here, measured the height of the labelled pattern, and got 0.41 m; it should be 0.6 m (see the figure: the vertical black line and its length in the bottom left)... However, the LiDAR points seem to cover the entire pattern. What am I seeing wrong?

rviz_screenshot_2020_06_24-10_38_19

miguelriemoliveira commented 4 years ago

Hi @aaguiar96 ,

I remember you used a fixed pattern size to select only a subset of the point cloud and then apply RANSAC to it... could it be that this size is hardcoded for the old, smaller pattern and is cutting this larger pattern in half?

aaguiar96 commented 4 years ago

I remember you used a fixed pattern size to select only a subset of the point cloud and then apply RANSAC to it... could it be that this size is hardcoded for the old, smaller pattern and is cutting this larger pattern in half?

No, I now use the dictionary that has the configs to set the sizes... But this is not the problem. If we look at the raw point cloud, there are no other cloud points on the pattern. I think the velodyne is not covering the entire pattern. See here again, without labels, only the raw cloud. In the images on the left, it seems that the velodyne covers the entire pattern, but when I measure the height, I again get 0.4 meters...

rviz_screenshot_2020_06_24-10_50_17

This is causing the problem that @eupedrosa saw on the calibration

miguelriemoliveira commented 4 years ago

I think the bottom part of the pattern is not being captured by the velodyne. Since you were too close, the lowest layer of the lidar hit the middle of the pattern, so the bottom part is not detected... could it be? For the times in the bag where you are farther away, does the lidar capture all of the pattern? The solution would be to hold the pattern higher at that distance...
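
This can be checked with a bit of trigonometry. Assuming the lidar is a VLP-16 with a ±15 degree vertical field of view and the pattern is roughly centred on the sensor's horizontal plane, the vertical span the beams cover at distance d is 2 * d * tan(15°):

```python
import math

def vlp16_vertical_span(d, half_fov_deg=15.0):
    """Vertical extent (m) the VLP-16 beams cover at distance d (m)."""
    return 2 * d * math.tan(math.radians(half_fov_deg))

for d in (1.0, 1.5, 2.0):
    print(d, round(vlp16_vertical_span(d), 2))
# 1.0 -> 0.54 m, 1.5 -> 0.8 m, 2.0 -> 1.07 m
```

So closer than roughly 1.12 m the beams cannot span a 0.6 m pattern even in the best case, which is consistent with the 0.41 m measurement above.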

aaguiar96 commented 4 years ago

I think the bottom part of the pattern is not being captured by the velodyne. Since you were too close, the lowest layer of the lidar hit the middle of the pattern, so the bottom part is not detected... could it be? For the times in the bag where you are farther away, does the lidar capture all of the pattern?

Ok, I'll test that. But then the labelled images on the left do not represent the LiDAR points?

miguelriemoliveira commented 4 years ago

There are Image displays and Camera displays.

Image displays only show the images and the detected pattern corners.

Camera displays show the images mixed with 3D data from rviz (e.g. the points from the velodyne). But this mixing of image and 3D data requires a good calibration, which you are not sure to have during the dataset collection stage.

They are meant as visual feedback for better setting an initial estimate.

aaguiar96 commented 4 years ago

There are Image displays and Camera displays.

Image displays only show the images and the detected pattern corners.

Camera displays show the images mixed with 3D data from rviz (e.g. the points from the velodyne). But this mixing of image and 3D data requires a good calibration, which you are not sure to have during the dataset collection stage.

They are meant as visual feedback for better setting an initial estimate.

Ok, got it.

But you're right, that's the problem.

Here I measure a height of 0.58 meters. However, I have labelled points on the body.

Should I fix the labelling procedure first?...

aaguiar96 commented 4 years ago

It is worth noting that in the majority of the bagfile the velodyne does not capture the entire pattern. So maybe I'll have to record a new one, right?

I cannot be too close nor too far, so that the charuco is detected and the velodyne captures the entire pattern...

miguelriemoliveira commented 4 years ago

Yes, I think you need a new bag file. If possible, you should look at the sensor data while recording the bag to make sure you are capturing good data (data in which all sensors capture the pattern with sufficient quality for labelling).

I told you that usually we need about 20 or 30 bag files to get one right... You never believe me :) ...