lardemua / atom

Calibration tools for multi-sensor, multi-modal robotic systems
GNU General Public License v3.0

Generate final agrob paper results #234

Closed · aaguiar96 closed 3 years ago

aaguiar96 commented 4 years ago

Hi @miguelriemoliveira and @eupedrosa

I think I have all the tools I need to start generating all the results for the agrob paper. For now, I have two questions:

and @miguelriemoliveira answered:

It is not a question of if you can, you must do it. Otherwise you could not ensure you were using the same images and point clouds, and could not say that the difference in results is solely due to the change in the initial guess. I suggest you get one dataset with a good first guess (the best you can get). Then you add noise to the adequate transformations in the json (or when loading) so that you have something like: perfect first guess, perfect first guess with 1% noise, perfect first guess with 2% noise, ..., perfect first guess with 20% noise. What do you think?

This seems fine to me. My only question is: how do I generate that noise? I can use the first guess tool that we implemented in ATOM, generate first guesses, and change the single dataset file with the generated first guesses. What do you think? Do you have a better option?

miguelriemoliveira commented 4 years ago

This seems fine to me. My only question is: how do I generate that noise? I can use the first guess tool that we implemented in ATOM, generate first guesses, and change the single dataset file with the generated first guesses. What do you think? Do you have a better option?

I would add an option on the loader to add the noise to the transformations (the question is to which of those). It is better than having lots of datasets.

For example, suppose you want to add 1% noise to some transformation T.

You have the rotation angles (must convert from the quaternion) and the translation vector.

rx, ry, rz, tx, ty, tz

Just draw a random number between 0.99 and 1.01 (for 1%) and multiply each of those 6 parameters by it.
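A minimal sketch of that idea in Python (the helper name is hypothetical; assuming scipy for the quaternion conversion):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def add_relative_noise(quat, trans, noise=0.01):
    """Multiply each of the 6 pose parameters (rx, ry, rz, tx, ty, tz)
    by a random factor in [1 - noise, 1 + noise], e.g. noise=0.01 for 1%."""
    # quaternion (x, y, z, w) -> euler angles rx, ry, rz
    angles = Rotation.from_quat(quat).as_euler('xyz')
    params = np.concatenate([angles, trans])
    factors = np.random.uniform(1.0 - noise, 1.0 + noise, size=6)
    noisy = params * factors
    noisy_quat = Rotation.from_euler('xyz', noisy[:3]).as_quat()
    return noisy_quat, noisy[3:]
```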

aaguiar96 commented 4 years ago

I would add an option on the loader to add the noise to the transformations (the question is to which of those). It is better than having lots of datasets.

Hm, ok. That's an option. My idea was to have a single dataset, but multiple initial guesses generated with ATOM. Then, I would have to change the config file each time I want to change the initial guess.

miguelriemoliveira commented 4 years ago

I also like your idea. Either would be fine, I think. The problem is you would not be able to quantify each noisy dataset as 1%, 2%, etc. You would have to say the perfect, the good, the average, the bad, the lousy. That's also a possibility.

aaguiar96 commented 4 years ago

I also like your idea. Either would be fine, I think. The problem is you would not be able to quantify each noisy dataset as 1%, 2%, etc. You would have to say the perfect, the good, the average, the bad, the lousy. That's also a possibility.

Hm, ok! I will wait for @eupedrosa's opinion, also about the second question. For now, I will start generating all the other results; I'll leave these for when I finish all the others.

miguelriemoliveira commented 4 years ago

My second question is related to #130. We should use the same first guess for the ICP calibration that we used as the best first guess for ATOM, right? How do we emulate this first guess? If I use the CloudCompare software, I don't know how to do it. The most direct way I see is to implement a script that receives the two point clouds and the initial guess, and uses the ICP library to perform the alignment. What do you think?

I would prefer that we have a script, because we could reuse it in the future. With CloudCompare, the only way I see to ensure the same first guess would be to change the position of the points before loading them into CloudCompare. But that would be very complex and generate new point clouds (I don't like it). I suggest that if we do go with ICP, then we assume we will use different first guesses.
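A minimal sketch of what such a script could look like, assuming Open3D as the ICP library (any ICP implementation that accepts an initial transformation would do):

```python
import open3d as o3d

def icp_from_initial_guess(source_path, target_path, T_init,
                           max_corr_dist=0.05):
    """Align two point clouds with ICP, starting from the same
    initial guess (4x4 matrix) used for the ATOM calibration."""
    source = o3d.io.read_point_cloud(source_path)
    target = o3d.io.read_point_cloud(target_path)
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # refined 4x4 homogeneous transform
```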

aaguiar96 commented 4 years ago

I would prefer that we have a script, because we could reuse it in the future. With CloudCompare, the only way I see to ensure the same first guess would be to change the position of the points before loading them into CloudCompare. But that would be very complex and generate new point clouds (I don't like it). I suggest that if we do go with ICP, then we assume we will use different first guesses.

Ok, I agree! I'll work on the script. It should not be complicated.

eupedrosa commented 4 years ago

Hm, ok! I will wait for @eupedrosa's opinion, also about the second question.

I'm with @miguelriemoliveira on this one. We should be able to quantify how much error we apply to the initial guess. We can even use the optimized poses as the initial guess. This way, we know that the final result should be equal to the initial one. We can even use pose differences to quantify the error.
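For the pose differences, something like this minimal sketch would do (not an existing ATOM function, just the idea):

```python
import numpy as np

def pose_difference(T_a, T_b):
    """Translation (m) and rotation (rad) differences between two 4x4 poses."""
    delta = np.dot(np.linalg.inv(T_a), T_b)
    translation_error = np.linalg.norm(delta[:3, 3])
    # rotation angle recovered from the trace of the rotation block
    cos_angle = (np.trace(delta[:3, :3]) - 1.0) / 2.0
    rotation_error = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return translation_error, rotation_error
```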

About the second question: again, I'm aligned with @miguelriemoliveira's point of view.

aaguiar96 commented 4 years ago

I'm with @miguelriemoliveira on this one. We should be able to quantify how much error we apply to the initial guess. We can even use the optimized poses as the initial guess. This way, we know that the final result should be equal to the initial one. We can even use pose differences to quantify the error.

About the second question: again, I'm aligned with @miguelriemoliveira's point of view.

Ok, great. So, I think the generation of noisy initial guesses deserves its own issue. As soon as I start working on it, I'll create it. Thanks @miguelriemoliveira and @eupedrosa

aaguiar96 commented 4 years ago

If you want to test:

Train dataset: https://drive.google.com/file/d/1i3Won2hSNDBVZPy0oVz6bOD3acX9c3DA/view?usp=sharing

Test dataset: https://drive.google.com/file/d/1BysyBaDBhTxXMPgJHHkkqk-fKssp_OkE/view?usp=sharing

aaguiar96 commented 4 years ago

Hi @miguelriemoliveira and @eupedrosa

So, I have generated all the results except the characterization of the initial guess, OpenCV, and ICP. I removed the analysis of partial detections, since almost all the collections are partial. Even so, we have many, many results!

Here are the summary tables. Results missing: M1, M2, M5, M6, M26-M31.

[Screenshots of the two summary tables]

So, the open issues are:

  1. Generate the chain of transformations from a calibration result (OpenCV, factory, ICP), since those give us sensor-to-sensor calibration (see the sketch after this list).
  2. Implement ICP script
  3. Implement noise addition to initial guess
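For 1., a sensor-to-sensor transform is just the product of the atomic transforms along the chain; a minimal sketch (the dictionary layout is an assumption):

```python
import numpy as np

def aggregate_transform(transforms, chain):
    """Compose a sensor-to-sensor transformation from a calibration
    result, given 4x4 matrices for each atomic transform in the chain.

    transforms: dict mapping 'parent-child' keys to 4x4 numpy arrays
    chain: ordered list of 'parent-child' keys from one sensor to the other
    """
    T = np.eye(4)
    for key in chain:
        T = np.dot(T, transforms[key])
    return T
```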

I think I will start from 3., because then all the ATOM results will be generated! :+1:

@miguelriemoliveira and @eupedrosa do you have any hint on how to do this?

miguelriemoliveira commented 4 years ago

Hi @aaguiar96 ,

great work. I took a look at the tables; they look really nice. I would only suggest that you use 3 decimal places instead of four. It would make the tables lighter.

About 3.: did we not discuss this yesterday? What is your question? I am lost...

eupedrosa commented 4 years ago

@miguelriemoliveira and @eupedrosa do you have any hint on how to do this?

No hints, but my thoughts :brain: and prayers :pray: are with you!

aaguiar96 commented 4 years ago

About 3.: did we not discuss this yesterday? What is your question? I am lost...

Sorry, you're right. You said this:

I would add an option on the loader to add the noise to the transformations (the question is to which of those). It is better than having lots of datasets.

For example, suppose you want to add 1% noise to some transformation T.

You have the rotation angles (must convert from the quaternion) and the translation vector.

rx, ry, rz, tx, ty, tz

Just draw a random number between 0.99 and 1.01 (for 1%) and multiply each of those 6 parameters by it.

So, just pick some transformation (maybe the one we are optimizing) and add that percentage of noise? Should I do that in the calibrate script?

aaguiar96 commented 4 years ago

No hints, but my thoughts :brain: and prayers :pray: are with you!

Thanks @eupedrosa. Today I am particularly inspired, it must be it... :rofl:

miguelriemoliveira commented 4 years ago

Hi @aaguiar96 ,

yes, in the calibrate script (using some argument).

Concerning which transformations to add noise to, this is from the hand-eye paper:

Optimization procedures suffer from the known problem of local minima. This problem tends to occur when the initial solution is far from the optimal parameter configuration, and may lead to failure in finding adequate parameter values. The problem is tackled by ensuring a plausible first guess for the estimated parameters. There are several parameters to be estimated, as seen in (7). We make use of different initialization strategies depending on the nature of the parameter. Intrinsic and distortion parameters (k̂ and d̂) are initialized by running a prior intrinsic camera calibration. Concerning the atomic transformations ({T̂_k}), these can be related to the camera or the pattern as stated in (10) or (11). To initialize camera-related transformations we developed an interactive tool which parses the URDF and creates a 3D visualization tool for ROS (RViz) interactive marker associated with each sensor. It is then possible to move and rotate the markers and position the sensor at will. This provides a simple, interactive method to easily generate plausible first guesses for the poses of the sensors. Real-time visual feedback is provided to the user by the observation of the 3D models of the several components of the robot model and how they are put together, e.g. where the camera is positioned w.r.t. the end effector, or where a Light Detection And Ranging (LiDAR) sensor is placed w.r.t. the robot. Also, for multi-sensor systems, one can observe how well the data from a pair of sensors overlap. Some examples of this procedure are given in Figure 6, and in videos for the eye-on-hand, eye-to-base and joint-hand-base use cases. In addition to this, we also provide an example of setting the initial estimate for an intelligent vehicle. Concerning the atomic transformations associated with the calibration pattern (^base T̂_pat_i in (10)), these are initialized by defining a new branch in the transformation tree which connects the pattern to the frame to which it is fixed, e.g. for the eye-on-hand case it is

    ^base T̂_pat_i = ^base A_rgb_opt_i · ^rgb_opt T_pat_i

where ^rgb_opt T_pat_i is estimated by solving the perspective-n-point problem for the detected pattern corners [29, 30, 31], and ^base A_rgb_opt_i is the aggregate transformation computed by deriving its topology from the tf tree and using the initial values for each atomic transformation in the chain.

so we have 3 different initializations depending on the nature of the parameter. I would say you should only add noise to the atomic transformations which are listed for calibration in the yaml file, i.e. those that we set manually using rviz. So you can read the config dictionary inside the dataset to figure out which transformations you must add noise to, if a noise flag is activated.
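A sketch of how the calibrate script could do this; all the dataset field names below are assumptions about the json layout, and add_relative_noise is the helper sketched earlier in this thread:

```python
import json

def load_dataset_with_noise(json_file, noise=0.0):
    """Load an ATOM dataset, perturbing only the transformations
    listed for calibration in the embedded config (keys assumed)."""
    with open(json_file) as f:
        dataset = json.load(f)
    if noise > 0.0:
        sensors = dataset['calibration_config']['sensors']  # assumed key
        for sensor in sensors.values():
            key = sensor['parent_link'] + '-' + sensor['child_link']
            for collection in dataset['collections'].values():
                tf = collection['transforms'][key]
                q, t = add_relative_noise(tf['quat'], tf['trans'], noise)
                tf['quat'], tf['trans'] = q.tolist(), t.tolist()
    return dataset
```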

aaguiar96 commented 4 years ago

Ok @miguelriemoliveira thanks. I'll work on that.

eupedrosa commented 4 years ago

Hi @miguelriemoliveira and @aaguiar96.

Matlab 2020b has a new Lidar Toolbox with a Lidar and Camera Calibration module. Maybe it is worth checking out. It references an IROS paper and a PAMI paper.

aaguiar96 commented 4 years ago

This is a mystery...

The problem can only be in the ICP transformation...

I performed the following test:

[image of the test]

Now, with the exact same code, I changed only this line:

atomic_tf = atomicTfFromCalibration(dataset, asensor, osensor, M)

where instead of using the transformation from the json, I used np.dot(m2, m1) from ICP (and all the other possible combinations). None of them gives good results...

aaguiar96 commented 4 years ago

Hi @miguelriemoliveira and @eupedrosa

I just added the RMS images feature to the camera-to-camera ATOM evaluation script. Here is the result for ATOM full:

[raw_cam2cam plot]

[calibrated_cam2cam plot]

eupedrosa commented 4 years ago

@aaguiar96, for now remove the x and y limits, and let matplotlib do it automatically. It will provide better images.

These images already tell us something. Before calibration we have widely spread errors with an offset, and the calibration fixes that. However, there are some (or just one?) collections with an offset. I believe this final offset is due to a skew in data synchronization.

aaguiar96 commented 4 years ago

@aaguiar96, for now remove the x and y limits, and let matplotlib do it automatically. It will provide better images.

Hi @eupedrosa. I fixed the limits because these are the images that go to the paper, and I think it is better to have the same limits in all of them.

These images already tell us something. Before calibration we have widely spread errors with an offset, and the calibration fixes that. However, there are some (or just one?) collections with an offset. I believe this final offset is due to a skew in data synchronization.

Yes, there is one "bad" collection here. And I agree, it may be due to synchronization problems!

miguelriemoliveira commented 4 years ago

Hi. Setting the same limits is a good idea. I just suggest symmetric axes, e.g. -5 to 5 instead of -10 to 5.
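In matplotlib that is just, for example:

```python
import matplotlib.pyplot as plt

# same symmetric limits on every plot, so the figures are comparable
plt.xlim(-5, 5)
plt.ylim(-5, 5)
```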

We could remove the bad collection, but for comparison purposes with other approaches perhaps we can leave it as is.

aaguiar96 commented 4 years ago

Hi @miguelriemoliveira and @eupedrosa

Today I was writing and organizing the paper. What I've done:

What's missing:

My idea for the meeting tomorrow is to collect our global conclusions about the results, to be able to write the rest of the paper this week.

aaguiar96 commented 3 years ago

This is done.

miguelriemoliveira commented 3 years ago

I see today is GitHub maintenance day!