Hi @kovlo ,
Yes, it should be better. I will try to take a look at this by the end of this week.
Should tell you something on Friday ...
Magnificent, thank you very much in advance!
Hi @kovlo,
I created a minimal ROS ecosystem to test your calibration.
It is all here:
https://github.com/miguelriemoliveira/atom_test_cam_livox
We have a cam_livox_description package, containing the URDF,
and a cam_livox_calibration package, for the calibration.
Right now I can play back the bagfile ...
One question: are you sure about these border sizes you entered in the config.yml?
I was able to set an initial estimate without problems ...
Now I recorded a dataset with 3 collections. Here it is:
I am inspecting it using the dataset_playback functionality, and figured out the labels are not correct. I think the automatic annotator does not work because of the way the points are structured in the Livox point cloud.
The green points should cover the whole pattern, and the black points should only be at the edges of the pattern.
So it means we must manually annotate the dataset ...
For annotating the pattern points I went to the first collection, removed all automatic annotations and then started annotating manually. Here's the result:
You should be careful and look from a side perspective to make sure all points lie on the pattern's plane. This does not have to be perfect, only approximate, especially because this Livox lidar clearly lacks the quality of a Velodyne: the points are all over the place.
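If you want a numeric check instead of just eyeballing it from the side, here is a small sketch of mine (not part of ATOM) that fits a plane to the annotated points and reports how far they are from it:

import numpy as np

def plane_fit_residuals(points_xyz):
    # points_xyz: Nx3 array with the manually annotated pattern points
    centroid = points_xyz.mean(axis=0)
    centered = points_xyz - centroid
    # the plane normal is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    distances = np.abs(centered @ normal)  # point-to-plane distances in meters
    return distances.mean(), distances.max()

# Mean/max residuals of a few millimeters are fine; several centimeters suggest
# that some points were annotated off the pattern's plane.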
Now you must do this for all collections.
Here's the second collection
And the third:
The output is a file called dataset_corrected.json, which we should use for calibrating the system.
This is the result of calibration:
Observing a single collection:
All collections:
The results are not visually perfect, but they are much better than what you had.
Some possible reasons why:
I suggest that you try to replicate my results and then we can work on improving them further.
Hi @kovlo ,
did you succeed in calibrating your system?
One question: are you sure about these border sizes you entered in the config.yml?
I measured them again. The board edge width is 20 +/- 2 mm, the big squares are 12 cm, and the small ones are 8 cm.
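A quick way to double-check that config.yml agrees with these measurements (values in meters). The key names under calibration_pattern are my assumption of ATOM's config template, so adjust them if your file differs:

import yaml

with open("config.yml") as f:
    cfg = yaml.safe_load(f)

pattern = cfg["calibration_pattern"]                # assumed section name
print("square size:", pattern.get("size"))          # expected ~0.12 (12 cm squares)
print("marker size:", pattern.get("inner_size"))    # expected ~0.08 (8 cm markers)
print("border size:", pattern.get("border_size"))   # expected ~0.02 (20 +/- 2 mm edge)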
I managed to reproduce your steps on your collection, and I also created a bigger dataset with 15 manually annotated collections and calibrated it. The calibration errors I got were much lower than before. The 3D visualized results are significantly better:
But the ATOM evaluation script's results are not as good as I hoped:
I attach my current dataset. Would you be so kind as to take a look at it and check whether I am still missing something? ATOM_Result.zip
I'm planning to record a new calibration bag following your data collection suggestions.
Hi @kovlo ,
I can try to look into it but only during the weekend or next week.
The data from LIVOX is not great, so I am afraid I am not expecting a very accurate calibration.
I will try anyway and get back to you.
Best regards, Miguel
Hi @kovlo ,
I am sorry but I could not pick this up last week. I will try to take a look this weekend.
Best regards, Miguel
Hi @kovlo ,
So I finally found some time to take a look at this.
I opted to start analysing your dataset labels first, using the dataset_playback functionality.
I looked into the dataset_corrected.json, which I assume is the one you are using for calibration, right?
There are a few minor mislabels, which should not affect the calibration's accuracy all that much. Some examples are:
Collection 4 - One of the boundary points (black) on the right is not on the pattern's boundary, but inside the pattern.
Collection 7 - One of the pattern points (green) is too high to belong to the pattern.
Then there are serious mislabels such as the ones in collection 1, where several points a couple of meters behind the pattern are labeled as the pattern:
This will disrupt the calibration. In addition, there are many collections in which there are no labels for the lidar. Some examples are:
Collection 8
Collection 11
... and this is also true for collections 12, 13, 14, 15 and many others.
I used the ATOM dataset inspection tool, e.g.
rosrun atom_calibration inspect_atom_dataset -j dataset_corrected.json
and got this:
Dataset contains 2 sensors: ['camera', 'livox']
Complete collections (41):['000', '001', '002', '003', '004', '005', '006', '007', '008', '009', '010', '011', '012', '013', '014', '016', '017', '018', '019', '020', '021', '022', '023', '024', '025', '026', '027', '028', '029', '030', '031', '032', '033', '034', '035', '036', '037', '038', '039', '040', '041']
Incomplete collections (0):[]
Sensor camera has 19 complete detections: ['000', '001', '002', '017', '019', '022', '023', '024', '025', '026', '028', '029', '030', '031', '032', '033', '035', '036', '037']
Sensor camera has 22 partial detections: ['003', '004', '005', '006', '007', '008', '009', '010', '011', '012', '013', '014', '016', '018', '020', '021', '027', '034', '038', '039', '040', '041']
Sensor livox is not a camera. All detections are complete.
+------------+-------------+----------+----------+
| Collection | is complete | camera | livox |
+------------+-------------+----------+----------+
| 000 | yes | detected | detected |
| 001 | yes | detected | detected |
| 002 | yes | detected | detected |
| 003 | yes | partial | detected |
| 004 | yes | partial | detected |
| 005 | yes | partial | detected |
| 006 | yes | partial | detected |
| 007 | yes | partial | detected |
| 008 | yes | partial | detected |
| 009 | yes | partial | detected |
| 010 | yes | partial | detected |
| 011 | yes | partial | detected |
| 012 | yes | partial | detected |
| 013 | yes | partial | detected |
| 014 | yes | partial | detected |
| 016 | yes | partial | detected |
| 017 | yes | detected | detected |
| 018 | yes | partial | detected |
| 019 | yes | detected | detected |
| 020 | yes | partial | detected |
| 021 | yes | partial | detected |
| 022 | yes | detected | detected |
| 023 | yes | detected | detected |
| 024 | yes | detected | detected |
| 025 | yes | detected | detected |
| 026 | yes | detected | detected |
| 027 | yes | partial | detected |
| 028 | yes | detected | detected |
| 029 | yes | detected | detected |
| 030 | yes | detected | detected |
| 031 | yes | detected | detected |
| 032 | yes | detected | detected |
| 033 | yes | detected | detected |
| 034 | yes | partial | detected |
| 035 | yes | detected | detected |
| 036 | yes | detected | detected |
| 037 | yes | detected | detected |
| 038 | yes | partial | detected |
| 039 | yes | partial | detected |
| 040 | yes | partial | detected |
| 041 | yes | partial | detected |
+------------+-------------+----------+----------+
which shows the livox detecting the pattern in all collections, which is not true.
So the cause of the inaccurate calibration (or at least one of the causes) is that the livox data is not correctly labeled in all the dataset collections. Many of them are simply not labeled.
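A quick way to spot those collections is to check the labels directly in the json. This is only a sketch of mine and assumes the usual ATOM layout in which each collection stores, per sensor, a labels entry with the indices (idxs) of the labeled points, so adapt the keys if your file differs:

import json

with open("dataset_corrected.json") as f:
    dataset = json.load(f)

for key in sorted(dataset["collections"]):
    livox_labels = dataset["collections"][key]["labels"].get("livox", {})
    n_points = len(livox_labels.get("idxs", []))
    status = "no livox labels" if n_points == 0 else f"{n_points} labeled livox points"
    print(f"collection {key}: {status}")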
Since I knew many of your collections were incorrectly labeled, I went forward with a calibration of a single collection I know is correctly labeled, e.g. 0.
rosrun atom_calibration calibrate -json $ATOM_DATASETS/cam_livox/ATOM_Result/dataset_corrected.json -v -rv -si -uic -csf 'lambda x: int(x)<1' --phased
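For reference, -csf is the collection selection function: a Python lambda evaluated on each collection key, so 'lambda x: int(x)<1' keeps only collection 000. Once more collections are correctly labeled, you could calibrate with a hand-picked set using something like this (the keys listed are just placeholders for the ones you trust):

rosrun atom_calibration calibrate -json $ATOM_DATASETS/cam_livox/ATOM_Result/dataset_corrected.json -v -rv -si -uic -csf 'lambda x: x in ["000", "017", "022"]' --phased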
Before calibration:
After calibration:
Which seems fine to me.
Then I calibrated using more than one collection that I knew was well labeled, but the results were not very good. It may be because of this problem:
Start reading from here: https://github.com/lardemua/atom/issues/498#issuecomment-1218389942
Let me know if I can help further.
Hi @miguelriemoliveira!
Thank you for your detailed answer again! I checked what you wrote and I could reproduce your result on a single collection.
I tried to create a clean new dataset to avoid the unlabeled collections of the previous one. (There, I intentionally annotated only a subset of the lidar clouds, assuming that the unannotated point clouds would be skipped by the calibrator.)
During dataset collection I made sure to click the collect button when the camera image showed the same scene as the point cloud. But I noticed that the display of the detected charuco board image was significantly delayed (could this be caused by running the system in Docker?). During dataset correction it seems that the charuco detection image saved alongside the point cloud in each collection was not the one I had observed. This made the dataset asynchronous, and it might also be the case with the dataset you checked for me.
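One way I could check this desync without re-recording would be to compare the message timestamps stored per sensor in the dataset json. This is just a sketch and assumes the ROS header stamps are kept under data/<sensor>/header/stamp, so the keys may need adjusting:

import json

with open("dataset.json") as f:
    dataset = json.load(f)

def stamp_to_sec(stamp):
    return stamp["secs"] + stamp["nsecs"] * 1e-9

for key in sorted(dataset["collections"]):
    data = dataset["collections"][key]["data"]
    t_cam = stamp_to_sec(data["camera"]["header"]["stamp"])
    t_lidar = stamp_to_sec(data["livox"]["header"]["stamp"])
    print(f"collection {key}: camera/livox time offset = {abs(t_cam - t_lidar):.3f} s")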
In conclusion, I assume I have to re-record the dataset, leaving the calibration board static at each location for a longer time, as you wrote in the manual.
Hi @kovlo ,
Hi @miguelriemoliveira!
In conclusion, I assume I have to re-record the dataset, leaving the calibration board static at each location for a longer time, as you wrote in the manual.
Yes, Docker could delay things and de-synchronize the sensor data, which explains why calibrating a single collection works well, but calibrating multiple collections does not.
I agree with you: you should create a new dataset (probably record a new bagfile) where these desync problems are solved, or at least controlled.
Let me know if I can help.
Hi @miguelriemoliveira
I was checking this issue... Perhaps you already know this, but just in case: Livox data is actually really good, but the sensor configuration is different from the 3D LiDARs that we are used to, like Velodyne, Robosense, etc.
This sensor gives elliptical beams (and not circular beams like the Velodyne). So some of the algorithms implemented for 3D LiDARs are not suitable for the Livox - for example, to determine the pattern limits we use spherical coordinates, which does not apply in the Livox case.
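As a rough illustration (my own sketch, not ATOM's actual code): grouping points by elevation angle works for a spinning lidar because elevation only takes a few discrete values (one per ring), but on the Livox it is essentially continuous, so per-ring edge detection finds no structure to work with:

import numpy as np

def count_elevation_rings(points_xyz, tol_deg=0.2):
    # spherical elevation angle of every point
    r = np.linalg.norm(points_xyz, axis=1)
    elevation = np.degrees(np.arcsin(points_xyz[:, 2] / r))
    # quantize and count distinct elevation levels
    return np.unique(np.round(elevation / tol_deg)).size

# A 16-beam Velodyne yields roughly 16 levels; a Livox Avia scan typically yields
# hundreds, because its rosette pattern sweeps elevation continuously.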
Best regards, André.
Hi @aaguiar96 ,
Thanks for the input.
Yes, I realized this after looking into the data. And, as you anticipated, some of the algorithms you developed were failing because of the structure of the point clouds. I fixed that problem.
When I say the quality is low I mean the distance measurement has a lot of noise. Take a look at the images above. With the velodynes, I get much better distance measurements.
Perhaps @kovlo can give his opinion on the quality of the Livox lidars?
I've been trying to use ATOM to calibrate a Livox lidar (https://www.livoxtech.com/avia) to a camera. The camera was placed to the right of the Livox, with a ~7 cm lateral distance between their optical centers.
This is the best result I could get so far for the following command:
rosrun atom_calibration calibrate -json /opt/robows/data/ATOM_Result/dataset.json -v -rv -si -oi -ss 1
I assume the result could be even better. If you have any suggestions for improving it, please share them with me.
I uploaded a 30s example data, the xacro and yaml files to this zip (2.9GB): https://drive.google.com/file/d/1Ne4MMLCuI_rXXFdRTta9Nrry1eTfF53S/view?usp=sharing