Hi @aaguiar96 ,
I like your idea. It makes sense. Concerning your questions, I think we can evaluate all methods in the same manner (this is actually mandatory to ensure a fair comparison), because all methods return as output the transformation from the lidar to the camera.
Then, for each method, you project the lidar jump edges that belong to the pattern onto the image (the projection uses both the calibrated transformation and the intrinsics) and measure the distance to the annotated lines (annotated once per image and used for all methods).
Makes sense?
> Makes sense?
Yes. So, the procedure of annotating the limits and finding the extrema points is the same for all methods. The only thing that changes is the input transformation. Makes sense. I'll start working on this test framework this afternoon.
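A minimal sketch of that metric, assuming the lidar limit points are already isolated, the annotated lines come as pixel arrays, and the association is nearest-pixel (all names are illustrative):

```python
import numpy as np
import cv2

def reprojection_error(limit_points, T_lidar_to_cam, K, D, annotated_pixels):
    # Move the 3D lidar jump edges (Nx3) into the camera frame.
    pts_h = np.hstack([limit_points, np.ones((len(limit_points), 1))])
    pts_cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]

    # Project using the camera intrinsics K (3x3) and distortion D.
    pixels, _ = cv2.projectPoints(pts_cam, np.zeros(3), np.zeros(3), K, D)
    pixels = pixels.reshape(-1, 2)

    # Distance from each projection to the closest annotated pixel, averaged.
    dists = np.linalg.norm(pixels[:, None, :] - annotated_pixels[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```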
Hi @miguelriemoliveira and @eupedrosa
I created a new directory - test - in the root of the atom repo. I'm starting with these new metrics there, and this could be the place where we put the other metrics already implemented in OptimizationUtils.
What do you think? Can I commit this configuration?
When in doubt, create a new branch, push it, and then open a pull request and ask for validation. It is difficult to guess what you did.
Ok, I'll do that.
Hi @miguelriemoliveira and @eupedrosa
Can you confirm this approach to compute the velodyne-to-camera transformation from the calibration results?
# -- Get velodyne to camera transformation
from_frame = 'base_link'
to_frame = child
base_to_child = opt_utilities.getTransform(from_frame, to_frame, collection['transforms'])
print(base_to_child)
to_frame = parent
base_to_parent = opt_utilities.getTransform(from_frame, to_frame, collection['transforms'])
print(base_to_parent)
# Compose the 4x4 matrices with a matrix product (element-wise * is wrong for numpy arrays)
vel2cam = np.dot(base_to_child, np.linalg.inv(base_to_parent))
with child as zed_left_camera_center and parent as vlp16_frame.
Do we have a more straightforward solution?
> Do we have a more straightforward solution?
Like in ROS, you can:
# -- Get velodyne to camera transformation
from_frame = 'vlp16_frame'
to_frame = 'zed_left_camera_center'
vel2cam = opt_utilities.getTransform(from_frame, to_frame, collection['transforms'])
@eupedrosa is right. Just ask for the transform you want directly.
Hi @miguelriemoliveira and @eupedrosa
I projected the limit points onto the image, and the results seem to be clearly affected by the velodyne shift in the calibration.
I'll make a pull request now so that you can see what I'm doing, and we can integrate it into atom.
> I'll make a pull request now so that you can see what I'm doing, and we can integrate it into atom.
Done.
Hi @aaguiar96 , but this projection of points in the image is not after calibration, is it?
> Hi @aaguiar96 , but this projection of points in the image is not after calibration, is it?
It is, I am using the results.json file to get the transforms.
Something is wrong here. Can you talk in zoom?
> Something is wrong here. Can you talk in zoom?
Yes.
ETA 30 secs
Fixed. Wrong input parameter.
Maybe we can use this metric in the optimization. That way, all errors would be in pixels, which, in my opinion, would be fantastic. @miguelriemoliveira, @aaguiar96, what do you think? It may not be very difficult to do.
> Maybe we can use this metric in the optimization. That way, all errors would be in pixels, which, in my opinion, would be fantastic. @miguelriemoliveira, @aaguiar96, what do you think? It may not be very difficult to do.
Great idea! :)
Hi @eupedrosa ,
Not sure about that, because it goes against the philosophy of looking at each sensor alone, which is what gives us scalability.
Suppose you have a system with 4 cameras and 2 lidars. How many pairs would you have? Would you use them all? If not all, which? And what if you are calibrating a system without cameras?
Another problem is with the orthogonal error, which would not be accounted for.
But I really like the idea that we would have pixels only ...
Does ATOM currently work without a camera? If we have 2 velodynes, will it calibrate? But I understand what you are saying.
> Suppose you have a system with 4 cameras and 2 lidars. How many pairs would you have? Would you use them all?
Yes, this is the Achilles heel. But I would say, use them all. Apply the laser reprojection in all 4 cameras, for each lidar. Is this overkill? Unnecessary?
But yeah, without cameras it would not be possible to calibrate. Maybe we can create a virtual camera :laughing:
> Another problem is with the orthogonal error, which would not be accounted for.
Not so sure about this, reprojection accounts for everything, right?
Hi @eupedrosa ,
> Does ATOM currently work without a camera? If we have 2 velodynes, will it calibrate?
It does not calibrate without a camera. The only place where we need one (and we were counting on finding an alternative way of doing it) is for creating the first guess of the pattern. Right now you can calibrate a single laser, and the camera will be used for creating the first guess. We could even say that, if there are no cameras, the first guess is random or something.
> Not so sure about this, reprojection accounts for everything, right?
Yes, you are right. This would be accounted for.
> Yes, this is the Achilles heel. But I would say, use them all. Apply the laser reprojection in all 4 cameras, for each lidar. Is this overkill? Unnecessary?
I would also vote for "use them all", but this explodes the number of residuals. Right now we already have a lot of those.
Perhaps we could talk about this? I am uncomfortable because it is a departure from the base philosophy of atom, and it brings in the problems we always pointed out as shortcomings of other approaches ... talk tomorrow (or tonight, if you want)?
Maybe we can leave this discussion for next week. Allow some time to reflect on this. If this departs from the base philosophy, then we should not pursue it.
Nonetheless, this is a nice metric for velodyne-camera error. For now, the focus should be that.
Ok. Sounds good.
Hi @miguelriemoliveira and @eupedrosa
SolvePnP should be way more precise than this, right (green dots)? Just to make sure that it is my fault.
[image: LiDAR Reprojection_screenshot_16 07 2020] https://user-images.githubusercontent.com/35901587/87667253-eff83e00-c761-11ea-9a66-e508b29cd470.png
@eupedrosa we're trying to make the "annotation" of corner lines automatic with SolvePnP.
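For context, the solvePnP idea is along these lines (a minimal sketch: the intrinsics and corner coordinates are made up, and only the cv2.solvePnP / cv2.projectPoints calls are real API). Estimate the pattern pose from the detected corners, then project the known 3D border of the pattern to obtain the limit lines automatically:

```python
import numpy as np
import cv2

# Made-up intrinsics and detected chessboard corners (illustrative only).
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
D = np.zeros(5)
corners_3d = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                       [0.0, 0.1, 0.0], [0.1, 0.1, 0.0]], dtype=np.float32)
corners_2d = np.array([[300, 200], [360, 205],
                       [298, 260], [358, 266]], dtype=np.float32)

# Estimate the pattern pose in the camera frame ...
ok, rvec, tvec = cv2.solvePnP(corners_3d, corners_2d, K, D)

# ... and project the (known) physical border of the pattern into the image.
border_3d = np.array([[-0.05, -0.05, 0.0], [0.15, -0.05, 0.0],
                      [0.15, 0.15, 0.0], [-0.05, 0.15, 0.0]], dtype=np.float32)
border_2d, _ = cv2.projectPoints(border_3d, rvec, tvec, K, D)
```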
Hi Andre,
Not sure. Is the code there? I can take a look...
> Not sure. Is the code there? I can take a look...
Yes @miguelriemoliveira :)
Did you find any bug, @miguelriemoliveira?
Hi @miguelriemoliveira and @eupedrosa
I made some progress.
@miguelriemoliveira I figured out why you're not obtaining the correct projection. I was obtaining it with an old commit. As soon as I pulled, I got the same result as you. So, I went to see the last commits and figured it out. The "error" is in the calibration itself. With the parameter max_nfev on this line:
https://github.com/lardemua/atom/blob/45c90f8bccf087167de7109bfd9ebf928c908c33/atom_core/scripts/calibrate#L586-L587
the calibration is too fast, and the result is not good. As soon as I replaced it with
opt.startOptimization(optimization_options={'ftol': 1e-8, 'xtol': 1e-8, 'gtol': 1e-8, 'diff_step': 1e-3, 'x_scale':'jac'})
it works.
As solvePnP is not accurate enough, I developed an annotation tool. The user has to introduce four classes of points (one for each pattern side), and then the tool automatically fits the points to a polynomial. As we discussed, due to the distortion, we cannot use straight lines here.
Here are the results:
NOTE: If we use the manual annotation, we must save the annotated data into a file. It's really boring to annotate...
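The fitting step is essentially this (a minimal sketch; the clicked pixel coordinates are made up):

```python
import numpy as np

# Pixels the user clicked for one side of the pattern (illustrative values).
clicked = np.array([[100, 220], [180, 240], [260, 265],
                    [340, 295], [420, 330]], dtype=float)

# Fit x -> y with a 3rd order polynomial (distorted lines are not straight).
coeffs = np.polyfit(clicked[:, 0], clicked[:, 1], deg=3)

# Densely sample the curve, to draw it and to measure distances against it.
xs = np.linspace(clicked[:, 0].min(), clicked[:, 0].max(), 200)
curve = np.stack([xs, np.polyval(coeffs, xs)], axis=1)
```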
Hi @aaguiar96 ,
that's a good advance. Congratulations. About the max_nfev, it was my mistake. I was using it to test the writing of the xacro and forgot to remove it.
Some questions:
You used a polynomial of what degree? If you want to use a model, you should take a look at the distortion models and use one of those. But I am not sure that a polygon (instead of a polynomial) would not be better. You would just connect the annotated points with lines ... it would have the advantage of not needing 4 classes, just one.
But, again, I am not really sure.
About the writing to file, perhaps it would be best to create a new separate json for these limit points image annotations. But I am also not entirely sure what the best option is. We could also add that information to the json ...
You should use a different color than black. It is very hard to see.
> You used a polynomial of what degree? If you want to use a model, you should take a look at the distortion models and use one of those.
I was using order three, but it's a good idea to use order five, like the distortion models.
> You would just connect the annotated points with lines ... it would have the advantage of not needing 4 classes, just one.
It is also a good idea.
> About the writing to file, perhaps it would be best to create a new separate json for these limit points image annotations. But I am also not entirely sure what the best option is. We could also add that information to the json ...
I think I'll start by writing into a new json for now. If we come up with a better idea, it is easy to change this feature.
If you want to try the annotation, here's how it works.
Hi @aaguiar96 ,
don't forget to pull. Just committed.
These instructions should be printed when executing the annotation script. Can you do that? Remember, I have the memory of a snail : - )
> These instructions should be printed when executing the annotation script. Can you do that? Remember, I have the memory of a snail : - )
Eheh, sure I'll do that! :smile:
Hi @miguelriemoliveira and @eupedrosa
I think I have the LiDAR-Camera metric script almost done.
Features: the annotations are saved to an evaluation.json file.
Here's a picture (where I annotated the limits wrongly on purpose) of the reprojection error:
What to do next?
Also, for the article, shouldn't these pictures become more professional? Any ideas on how to do that?
@aaguiar96, are you using discrete points in the data association?
Why can't we use point-to-segment distance?
> What to do next?
> - Should I make graphics of the reprojection error per collection?
> - Should I compute the average reprojection error in all the collections?
> - Should I also compute the root mean square error?
> - Should I also compute the rotation and translation error as @eupedrosa did in hand-in-eye?
> - Should I save the evaluation results in the json file also?
This is easy, yes to all.
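Computing those statistics is indeed straightforward, something like this sketch (the per-point errors are assumed to be signed pixel offsets in x and y; the sample values are made up):

```python
import numpy as np

# collection key -> (N, 2) array of per-point reprojection errors in pixels.
errors = {'0': np.array([[4.2, 5.1], [5.0, -4.8]]),
          '1': np.array([[-3.9, 5.3], [4.4, 4.9]])}

all_errors = np.vstack(list(errors.values()))
mean_xy = np.abs(all_errors).mean(axis=0)          # average absolute error per axis
rmse_xy = np.sqrt((all_errors ** 2).mean(axis=0))  # root mean square error
std_xy = all_errors.std(axis=0)                    # standard deviation per axis
print('mean:', mean_xy, 'rmse:', rmse_xy, 'std:', std_xy)
```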
> Why can't we use point-to-segment distance?
Hi @eupedrosa. I can and I will. I have the polynomial coefficients. There may be a way of computing the point-to-curve distance.
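One option is to sample the fitted polynomial densely and measure the point-to-segment distance against the resulting polyline, along these lines (a minimal sketch, illustrative names):

```python
import numpy as np

def point_to_segment(p, a, b):
    # Distance from 2D point p to the segment from a to b.
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def point_to_polyline(p, vertices):
    # Distance from p to a polyline, e.g. a densely sampled polynomial curve.
    return min(point_to_segment(p, vertices[i], vertices[i + 1])
               for i in range(len(vertices) - 1))
```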
> This is easy, yes to all.
Ok, nice. @eupedrosa, can you give me some hints about the rotation and translation error?
Hi @aaguiar96 ,
my take:
> Should I make graphics of the reprojection error per collection?
Yes, similar to the last graphics of the hand-eye paper. The evaluation script from Afonso has code for this.
> Should I compute the average reprojection error in all the collections?
Yes
> Should I also compute the root mean square error?
Yes, we can decide later which is the best to display.
> Should I also compute the rotation and translation error as @eupedrosa did in hand-in-eye?
Not sure if it is possible. For the hand eye case it was more or less straightforward, but here I am not sure. @eupedrosa can help for sure.
> Should I save the evaluation results in the json file also?
Hum, not sure about this one. I mean, this is just a way of analysing the results (which are in the json files). What would you like to write into a file?
> Also, for the article, shouldn't these pictures become more professional? Any ideas on how to do that?
I like the pictures the way they are. A couple of rules of thumb, though: whenever possible, don't use just color to separate items. For example, blue can be dots and green can be squares (what are the green ones, BTW?). Red dots are not very visible. Unite the last point with the first, to get a closed polygon.
For now this is done, right?
Hi @eupedrosa
Since you've been helping me with the camera-to-camera script, can you also test this one, just to make sure I'm not missing anything? It seems to be working well.
How to run:
rosrun atom_evaluation range_sensor_to_camera_evaluation.py --train_json /home/andre-criis/Documents/saved_datasets/train_dataset/atom_calibration_vel2cam.json --test_json /home/andre-criis/Documents/saved_datasets/train_dataset/atom_calibration_vel2cam.json -ss "vlp16" -ts "right_camera" -ef "/home/andre-criis/Documents/datasets/evaluation.json" -si
When you perform the annotation for the first time, the program will save it. The next time you run the script for the same dataset, you can use the -ef parameter to load the annotations.
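Under the hood, the saved file is just the clicked points, something like this sketch (the layout is illustrative, not the actual file format):

```python
import json

# collection -> pattern side -> clicked pixel coordinates (illustrative).
annotations = {'0': {'top': [[100, 220], [180, 240], [260, 265]]}}

# First run: persist the manual annotations.
with open('evaluation.json', 'w') as f:
    json.dump(annotations, f, indent=2)

# Later runs (-ef): load them instead of re-annotating.
with open('evaluation.json') as f:
    annotations = json.load(f)
```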
Ok, maybe tomorrow I will test it.
OK, I will also try to experiment with it.
Thanks guys!
Hi @miguelriemoliveira and @eupedrosa
I noticed a limitation of our approach while testing this script.
To execute it, we also need to run the calibration for the test dataset, since the limit_points are only calculated during the calibration, and not during data collection.
Maybe in the future we can migrate the limit points calculation to the data collector, right?
Yes, it should be calculated by the collector. I believe I already discussed that with @miguelriemoliveira. The rationale for not calculating it in the collector was the existence of old datasets where that calculation was not present.
What we should do is put the calculation in the collector and only calculate the limit_points if they are not present. Eventually, the calculation in the calibrate script should be deprecated.
> Yes, it should be calculated by the collector. I believe I already discussed that with @miguelriemoliveira. The rationale for not calculating it in the collector was the existence of old datasets where that calculation was not present.
> What we should do is put the calculation in the collector and only calculate the limit_points if they are not present. Eventually, the calculation in the calibrate script should be deprecated.
Seems fine to me!
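That fall-back could be as simple as this sketch (the dataset keys and the helper are illustrative, not the actual atom API):

```python
# Illustrative dataset fragment; the real layout lives in the dataset json.
collection = {'labels': {'vlp16': {'idxs': []}}}

def labelLimitPoints(sensor_data):
    # Hypothetical stand-in for the labelling currently done in the calibrate script.
    return []

# Only compute limit_points when an (older) dataset does not have them yet.
for sensor_key, sensor_data in collection['labels'].items():
    if 'limit_points' not in sensor_data:
        sensor_data['limit_points'] = labelLimitPoints(sensor_data)
```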
@eupedrosa's dataset is working for LiDAR-Camera.
Example for one collection:
| # | X Error (px) | Y Error (px) | X Standard Deviation (px) | Y Standard Deviation (px) |
| --- | --- | --- | --- | --- |
| 0 | 4.6666 | 4.9902 | 5.8257 | 6.9984 |
So, now I can start filling the tables on the paper! :)
> Maybe in the future we can migrate the limit points calculation to the data collector, right?
Yes, @eupedrosa and I discussed this. The best place to have that is definitely the collector_and_labeller.
Right now this is a single node; in the (distant) future we should consider splitting the collector and the labeller. Opened #230 to discuss this.
Closing this, it's solved.
Hello @miguelriemoliveira and @eupedrosa
My idea for a metric to evaluate the Velodyne-Camera calibration is:
Regarding the comparison with the state-of-the-art, I have some questions: