This seems fine to me. My only question is: how do I generate that noise? I can use the first guess tool that we implemented in ATOM, generate first guesses, and change the unique dataset file with the first guesses generated. What do you think? Do you have a better option?
I would add an option on the loader to add the noise to the transformations (the question is to which of those). It is better than having lots of datasets.
For example, suppose you want to add 1% noise to some transformation T.
You have the rotation angles (you must convert from the quaternion) and the translation vector:
`rx, ry, rz, tx, ty, tz`
Just draw a random number between 0.99 and 1.01 (for 1%) and multiply each of those 6 parameters by it.
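A minimal sketch of that idea, assuming ROS's `tf.transformations` is available (ATOM is ROS-based) and drawing an independent factor for each parameter; the function name is illustrative:

```python
import random
import tf.transformations as tft

def add_noise_to_pose(trans, quat, noise=0.01):
    """Multiply each of the 6 pose parameters by a random factor in [1 - noise, 1 + noise]."""
    rx, ry, rz = tft.euler_from_quaternion(quat)  # quaternion -> rotation angles
    tx, ty, tz = trans
    rx, ry, rz, tx, ty, tz = [v * random.uniform(1.0 - noise, 1.0 + noise)
                              for v in (rx, ry, rz, tx, ty, tz)]
    return (tx, ty, tz), tft.quaternion_from_euler(rx, ry, rz)
```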
> I would add an option on the loader to add the noise to the transformations (the question is to which of those). It is better than having lots of datasets.
Hm, ok. That's an option. My idea was to have a single dataset, but multiple initial guesses generated from atom. Then, I would have to change the config file each time I want to change the initial guess.
I also like your idea. Either would be fine, I think. The problem is you would not be able to quantify each noisy dataset like 1%, 2%, etc. You would have to say the perfect, the good, the average, the bad, the lousy. That's also a possibility.
> I also like your idea. Either would be fine, I think. The problem is you would not be able to quantify each noisy dataset like 1%, 2%, etc. You would have to say the perfect, the good, the average, the bad, the lousy. That's also a possibility.
Hm, ok! I will wait for @eupedrosa's opinion, also about the second question. For now, I will start generating all the other results; I'll leave these for when I finish all the others.
My second question is related to #130. We should use the same first guess for the ICP calibration that we used as the best first guess for ATOM, right? How do we emulate this first guess? If I use cloud compare software, I don't know how to do it. The most direct way I'm seeing is to implement a script that receives the two pointclouds and the initial guess, and uses the ICP library to perform the alignment. What do you think?
I would prefer that we have a script, because we could reuse it in the future. With cloud compare the only way I see to ensure the same first guess would be to change the position of the points before loading them into cloud compare. But that would be very complex and generate new point clouds (I don't like it). I suggest that if we do go with ICP, then we assume we will use different first guesses.
> I would prefer that we have a script, because we could reuse it in the future. With cloud compare the only way I see to ensure the same first guess would be to change the position of the points before loading them into cloud compare. But that would be very complex and generate new point clouds (I don't like it). I suggest that if we do go with ICP, then we assume we will use different first guesses.
Ok, I agree! I'll work on the script. It should not be complicated.
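A rough sketch of what that script could look like, assuming Open3D as the ICP library (the file arguments, the distance threshold, and the hardcoded initial guess are placeholders):

```python
import sys
import numpy as np
import open3d as o3d

# usage: icp_align.py <source_cloud> <target_cloud>
source = o3d.io.read_point_cloud(sys.argv[1])
target = o3d.io.read_point_cloud(sys.argv[2])

# replace with the same first guess given to ATOM, as a 4x4 matrix
T_init = np.eye(4)

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,  # placeholder threshold, in meters
    init=T_init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(result.transformation)  # the sensor-to-sensor transform estimated by ICP
```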
> Hm, ok! I will wait for @eupedrosa's opinion, also about the second question.
I'm with @miguelriemoliveira on this one. We should be able to quantify how much error we apply to the initial guess. We can even use the optimized poses as initial guess; this way, we know that the final result should be equal to the initial one. We can even use pose differences to quantify the error.
About the second question: again, aligned with @miguelriemoliveira's point of view.
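For the pose differences mentioned above, a minimal sketch (assuming both poses are given as 4x4 homogeneous matrices):

```python
import numpy as np

def pose_difference(T_a, T_b):
    """Translation distance and rotation angle (rad) between two 4x4 poses."""
    delta = np.dot(np.linalg.inv(T_a), T_b)
    dt = np.linalg.norm(delta[0:3, 3])
    # rotation angle recovered from the trace of the rotation block
    cos_angle = (np.trace(delta[0:3, 0:3]) - 1.0) / 2.0
    dr = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return dt, dr
```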
> I'm with @miguelriemoliveira on this one. We should be able to quantify how much error we apply to the initial guess. We can even use the optimized poses as initial guess; this way, we know that the final result should be equal to the initial one. We can even use pose differences to quantify the error.
> About the second question: again, aligned with @miguelriemoliveira's point of view.
Ok, great. So I think the generation of noisy initial guesses deserves its own issue; as soon as I start working on it, I'll create it. Thanks @miguelriemoliveira and @eupedrosa
If you want to test:
Train dataset: https://drive.google.com/file/d/1i3Won2hSNDBVZPy0oVz6bOD3acX9c3DA/view?usp=sharing
Test dataset: https://drive.google.com/file/d/1BysyBaDBhTxXMPgJHHkkqk-fKssp_OkE/view?usp=sharing
Hi @miguelriemoliveira and @eupedrosa
So, I have generated all the results except the characterization of the initial guess, OpenCV, and ICP. I removed the analysis of partial detections, since almost all the collections are partial. Even so, we have many, many results!
See the summary tables here. Missing results: M1, M2, M5, M6, M26-M31.
So, the open issues are:
I think I will start from 3., because in that case all the ATOM results will be generated! :+1:
@miguelriemoliveira and @eupedrosa do you have any hint on how to do this?
Hi @aaguiar96 ,
great work. I took a look at the tables; they look really nice. I would only suggest that you use 3 decimal places instead of 4. It would make the tables lighter.
About 3. did we not discuss this yesterday? What is your question? I am lost ...
> @miguelriemoliveira and @eupedrosa do you have any hint on how to do this?
No hints, but my thoughts :brain: and prayers :pray: are with you!
> About 3. did we not discuss this yesterday? What is your question? I am lost ...
Sorry, you're right. You said this:
> I would add an option on the loader to add the noise to the transformations (the question is to which of those). It is better than having lots of datasets.
> For example, suppose you want to add 1% noise to some transformation T.
> You have the rotation angles (you must convert from the quaternion) and the translation vector:
> `rx, ry, rz, tx, ty, tz`
> Just draw a random number between 0.99 and 1.01 (for 1%) and multiply each of those 6 parameters by it.
So, just pick some transformation (maybe the one we are optimizing) and add that percentage of noise? Should I do that in the calibrate script?
> No hints, but my thoughts :brain: and prayers :pray: are with you!
Thanks @eupedrosa. Today I am particularly inspired, it must be it... :rofl:
Hi @aaguiar96 ,
yes, in the calibrate script (using some argument).
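Something like this, perhaps (the flag name and help text are illustrative; `ap` is assumed to be the argparse parser already used by the calibrate script):

```python
# hypothetical argument for the calibrate script
ap.add_argument('-ng', '--noise_generation', type=float, default=0.0,
                help='Noise to add to the initial guess, e.g. 0.01 for 1%%.')
```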
Concerning which transformations to add noise to, this is from the hand-eye paper:
> Optimization procedures suffer from the known problem of local minima. This problem tends to occur when the initial solution is far from the optimal parameter configuration, and may lead to failure in finding adequate parameter values. The problem is tackled by ensuring a plausible first guess for the estimated parameters. There are several parameters to be estimated, as seen in (7). We make use of different initialization strategies depending on the nature of the parameter. Intrinsic and distortion parameters ($\hat{k}$ and $\hat{d}$) are initialized by running a prior intrinsic camera calibration. Concerning the atomic transformations ($\{\hat{T}_k\}$), these can be related to the camera or the pattern, as stated in (10) or (11). To initialize camera-related transformations we developed an interactive tool which parses the URDF and creates a 3D Visualization Tool for ROS (RViz) interactive marker associated with each sensor. It is then possible to move and rotate the markers and position the sensor at will. This provides a simple, interactive method to easily generate plausible first guesses for the poses of the sensors. Real-time visual feedback is provided to the user by the observation of the 3D models of the several components of the robot model and how they are put together, e.g. where the camera is positioned w.r.t. the end effector, or where a Light Detection And Ranging (LiDAR) sensor is placed w.r.t. the robot. Also, for multi-sensor systems, one can observe how well the data from a pair of sensors overlap. Some examples of this procedure are given in Figure 6, and videos for the eye-on-hand, eye-to-base and joint-hand-base use cases. In addition to this, we also provide an example of setting the initial estimate for an intelligent vehicle. Concerning the atomic transformations associated with the calibration pattern (${}^{base}_{pat_i}\hat{T}$ in (10)), these are initialized by defining a new branch in the transformation tree which connects the pattern to the frame to which it is fixed, e.g. for the eye-on-hand case it is
>
> $${}^{base}_{pat_i}\hat{T} = {}^{base}_{rgb\_opt}A_i \; {}^{rgb\_opt}_{pat_i}T$$
>
> where ${}^{rgb\_opt}_{pat_i}T$ is estimated by solving the perspective-n-point problem for the detected pattern corners [29, 30, 31], and ${}^{base}_{rgb\_opt}A_i$ is the aggregate transformation computed by deriving its topology from the tf tree and using the initial values for each atomic transformation in the chain.
so we have 3 different initializations depending on the nature of the parameter. I would say you should only add noise to the atomic transformations which are listed for calibration in the yaml file, i.e. those that we set manually using RViz. So you can read the config dictionary inside the dataset to figure out which transformations you must add noise to, if a noise flag is activated (see the sketch below).
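A sketch of that loader option; the key names (`parent_link`, `child_link`, `trans`) are assumptions about the dataset/config schema, so adjust to the real one:

```python
import random

def addNoiseToInitialGuess(dataset, noise):
    """Perturb only the transformations listed for calibration in the config."""
    for sensor_key, sensor in dataset['calibration_config']['sensors'].items():
        # assumed naming convention for the transform being calibrated
        transform_key = sensor['parent_link'] + '-' + sensor['child_link']
        for collection in dataset['collections'].values():
            transform = collection['transforms'][transform_key]
            # multiplicative noise on the translation; the rotation would get
            # the same treatment via the quaternion -> euler conversion above
            transform['trans'] = [v * random.uniform(1.0 - noise, 1.0 + noise)
                                  for v in transform['trans']]
```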
Ok @miguelriemoliveira thanks. I'll work on that.
Hi @miguelriemoliveira and @aaguiar96.
Matlab 2020b has a new Lidar Toolbox with a Lidar and Camera Calibration module. Maybe it is worth checking it out. It references an IROS and a PAMI paper.
This is a mystery...
The problem can only be in the ICP transformation...
I performed the following test:
Read a calibrated json file, and get the sensor-to-sensor calibration from it:
```python
# frames of the two sensors, read from the calibration config
oframe = dataset['calibration_config']['sensors'][osensor]['link']
aframe = dataset['calibration_config']['additional_data'][asensor]['link']
# use the first collection to look up the transforms
selected_collection_key = list(dataset['collections'].keys())[0]
sensor2sensor = atom_core.atom.getTransform(
    aframe, oframe, dataset['collections'][selected_collection_key]['transforms'])
```
Pass it to the function to get the atomic transformation:
```python
test_tf = atomicTfFromCalibration(dataset, asensor, osensor, sensor2sensor)
```
First test: does the atomic transformation correspond to the one present in the json? Yes.
Second test: save the results in a json and run the metric script. Results:
Now, with the exact same code, I change only this line:
```python
atomic_tf = atomicTfFromCalibration(dataset, asensor, osensor, M)
```
where instead of using the transformation from the json, I used `np.dot(m2, m1)` from ICP (and all the other possible combinations).
None of them give good results...
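For reference, those combinations can be enumerated systematically (a sketch; `m1` and `m2` are the two matrices coming from ICP, and `reference` is the known-good transform from the calibrated json):

```python
import numpy as np

def checkCompositions(m1, m2, reference):
    """Brute-force check of which composition of m1 and m2 matches the reference."""
    candidates = {
        'm1 * m2': np.dot(m1, m2),
        'm2 * m1': np.dot(m2, m1),
        'inv(m1) * m2': np.dot(np.linalg.inv(m1), m2),
        'm2 * inv(m1)': np.dot(m2, np.linalg.inv(m1)),
        'inv(m2) * m1': np.dot(np.linalg.inv(m2), m1),
        'm1 * inv(m2)': np.dot(m1, np.linalg.inv(m2)),
    }
    for name, candidate in candidates.items():
        print(name, np.allclose(candidate, reference, atol=1e-3))
```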
Hi @miguelriemoliveira and @eupedrosa
I just added the RMS images feature to the camera-to-camera ATOM evaluation script. Here is the result for ATOM full:
@aaguiar96, for now remove the x and y limits, and let matplotlib do it automatically. It will provide better images.
These images already tell us something. Before calibration we have widely spread errors with an offset, and the calibration fixes that. However, there are some (or just one?) collections with an offset. I believe this final offset is due to a skew in data synchronization.
> @aaguiar96, for now remove the x and y limits, and let matplotlib do it automatically. It will provide better images.
Hi @eupedrosa. I fixed the limits because these are the images that go to the paper, and I think it is better to have the same limits in all of them.
> These images already tell us something. Before calibration we have widely spread errors with an offset, and the calibration fixes that. However, there are some (or just one?) collections with an offset. I believe this final offset is due to a skew in data synchronization.
Yes, there is one "bad" collection here. And I agree, it may be due to synchronization problems!
Hi. Setting the same limits is a good idea. I just suggest symmetric axes, e.g. -5 to 5 instead of -10 to 5.
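e.g., with matplotlib (the limit values are just the example above):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlim(-5, 5)  # same symmetric limits across all figures for the paper
ax.set_ylim(-5, 5)
```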
We could remove the bad collection, but for comparison purposes with other approaches perhaps we can leave it as is.
Hi @miguelriemoliveira and @eupedrosa
Today I was writing and organizing the paper. What I've done:
What's missing:
My idea for the meeting tomorrow is to collect our global conclusions about the results, to be able to write the rest of the paper this week.
This is done.
I see today is GitHub maintenance day!
Hi @miguelriemoliveira and @eupedrosa
I think I have all the tools I need to start generating all the results for the agrob paper. For now, I have two questions:
and @miguelriemoliveira answered:
> This seems fine to me. My only question is: how do I generate that noise? I can use the first guess tool that we implemented in ATOM, generate first guesses, and change the unique dataset file with the first guesses generated. What do you think? Do you have a better option?
> If I use cloud compare software, I don't know how to do it. The most direct way I'm seeing is to implement a script that receives the two pointclouds and the initial guess, and uses the ICP library to perform the alignment. What do you think?