lardemua / atom

Calibration tools for multi-sensor, multi-modal robotic systems
GNU General Public License v3.0

Test ZED camera and Velodyne 3D calibration using ICP #130

Closed aaguiar96 closed 3 years ago

aaguiar96 commented 4 years ago

@miguelriemoliveira @eupedrosa

miguelriemoliveira commented 4 years ago

As discussed before, you can use CloudCompare for this: https://www.danielgm.net/cc/

miguelriemoliveira commented 4 years ago

... and don't forget to always add the project field to the issue, so it appears in

https://github.com/lardemua/AtlasCarCalibration/projects/2

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira and @eupedrosa

Sadly, I did not record the depth image topic nor the point cloud topic from the ZED... So now I have to generate them from the stereo images.

Do you know any toolbox to do that?

aaguiar96 commented 3 years ago

I tried to generate a disparity image using OpenCV, but the result is really bad...
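For reference, the attempt was along these lines (a minimal OpenCV sketch; the file names and matcher parameters are illustrative, not the exact code used):

import cv2

# Illustrative file names; assumes a rectified stereo pair.
left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters need per-camera tuning.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=5,
                               P1=8 * 5 ** 2, P2=32 * 5 ** 2,
                               uniquenessRatio=10, speckleWindowSize=100)

# compute() returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right).astype('float32') / 16.0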

eupedrosa commented 3 years ago

I am sorry @aaguiar96, I did not want to be the one to tell you, but... you need a new dataset!!! We should use the point cloud created by the ZED camera.

aaguiar96 commented 3 years ago

Ahah, I know that's the most logical solution, I was lazy... :-)

I will go to the lab next week, maybe Wednesday, and record a new dataset. I think I will not generate more results until then, because I will have to repeat everything with the new training dataset, right?

miguelriemoliveira commented 3 years ago

Hi,

you can also use the http://wiki.ros.org/depth_image_proc nodelet constellation to produce point clouds from raw images. That would avoid having to take a new bag file. If you use a camera_info msg built from the ZED factory calibration data, you could say this is the "factory" calibration... Not absolutely true, since the ZED API uses a different stereo algorithm, but pretty close.
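(For reference, the point_cloud_xyz nodelet essentially applies the pinhole back-projection below; this is only a rough numpy sketch of that math, with the intrinsics taken from the camera_info message.)

import numpy as np

def depth_to_cloud(depth, fx, fy, cx, cy):
    # Pinhole back-projection: u, v are pixel coordinates, z is the depth.
    v, u = np.indices(depth.shape)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack((x, y, z))  # HxWx3 array of 3D points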

@aaguiar96, if you are really going to get a new bagfile, is there any possibility to add another RGB sensor (it could be a Kinect from which you only take the RGB data) to the platform? Because I need a bagfile which would contain 3 cameras for an MSc student.

I am planning to do this on the AtlasCar towards the end of the month, but I will need the charuco board then. Do you think you can record all the bags you need by the 18th?

aaguiar96 commented 3 years ago

you can also use the http://wiki.ros.org/depth_image_proc nodelet constellation to produce point clouds from raw images. That would avoid having to take a new bag file. If you use a camera_info msg built from the ZED factory calibration data, you could say this is the "factory" calibration... Not absolutely true, since the ZED API uses a different stereo algorithm, but pretty close.

I will try that. If it works, I do not have to generate all the results again.

@aaguiar96, if you are really going to get a new bagfile, is there any possibility to add another RGB sensor (it could be a Kinect from which you only take the RGB data) to the platform? Because I need a bagfile which would contain 3 cameras for an MSc student.

In any case, next week I will record new bag files, just in case. I can record with four cameras: 2 from the ZED and 2 from the RealSense (they are fisheye). Is that ok for you, @miguelriemoliveira?

I am planning to do this on the AtlasCar towards the end of the month, but I will need the charuco board then. Do you think you can record all the bags you need by the 18th?

Yes.

miguelriemoliveira commented 3 years ago

Hi André,

I think so. We could try with the fisheye. But if you can skip the bagfile altogether, don't repeat it because of us.

Thanks, Miguel


aaguiar96 commented 3 years ago

Hi @miguelriemoliveira and @eupedrosa

Can you run python -c "import pcl" on your computers and tell me the output? I have PCL installed, but I'm always getting:

ImportError: libpcl_keypoints.so.1.7: cannot open shared object file: No such file or directory

It seems that Ubuntu 18.04 is set up for newer versions of PCL, but ROS still uses Python 2, so it crashes...

eupedrosa commented 3 years ago

ImportError: libpcl_keypoints.so.1.7: cannot open shared object file: No such file or directory

You have to compile the pcl module. Easy, right? Wrong! But alas, I found out how to make it work. Just do:

sudo apt install libpcl-dev
pip install git+https://github.com/eupedrosa/python-pcl

It will compile the Python module against the current libpcl, BUT I had to disable the visualization module.

aaguiar96 commented 3 years ago

Nice! Thanks @eupedrosa

We should add that to the requirements.txt in the future...

eupedrosa commented 3 years ago

Why do we need python-pcl in the project? It is just for the ICP, right? Something we are doing just for the paper. Additionally, it may not be a good idea to depend on a modified package.

aaguiar96 commented 3 years ago

Yes, you're right.

So, I don't think this should be in atom_evaluation/scripts/others. I'll just use it as a standalone script, just for the paper.

eupedrosa commented 3 years ago

You can still save it there. We just don't declare its dependencies.

aaguiar96 commented 3 years ago

Hi @eupedrosa and @miguelriemoliveira

This is implemented. The result seems to be really bad.

Look here.

[RViz screenshots: rviz_screenshot_2020_09_28-11_14_58, rviz_screenshot_2020_09_28-11_15_40]

I was expecting something better, but it is what it is. I am not imposing any restriction on the maximum number of iterations or on thresholds!
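For reference, the registration was along these lines (a minimal python-pcl sketch with illustrative file names; the call follows the python-pcl examples):

import pcl

source = pcl.load('zed_point_cloud_0.pcd')  # dense, from the ZED
target = pcl.load('vlp16_0.pcd')            # sparse, from the Velodyne

# Plain ICP with default settings: no cap on iterations, no thresholds.
icp = source.make_IterativeClosestPoint()
converged, transform, estimate, fitness = icp.icp(source, target)
print(converged, fitness)  # transform is the 4x4 source-to-target matrix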

aaguiar96 commented 3 years ago

Two things now:

miguelriemoliveira commented 3 years ago

Hi @aaguiar96,

the images are a bit confusing. I cannot understand them very well.

About the ICP, you are using some code, right? Can you test with CloudCompare for just one or two cases? I think it should work better (but again, the images are difficult to understand). I think that is better than moving towards more complicated algorithms like GICP.

If you can, send me the dataset with the point clouds and I will try to match them with CloudCompare.

Now I have to compute the entire chain of transformations from the sensor-to-sensor estimate. @miguelriemoliveira, is this already implemented somewhere? I will also need it for OpenCV.

Is this what you mean?

https://github.com/lardemua/atom/blob/92dddeafb12d22be7aa606bcba42085d24cd21d7/atom_core/src/atom_core/atom.py#L70-L79

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira

About the ICP, you are using some code, right? Can you test with CloudCompare for just one or two cases? I think it should work better (but again, the images are difficult to understand). I think that is better than moving towards more complicated algorithms like GICP.

It seems to give exactly the same result (expected, since I am using PCL functions).


But you can try out: test.zip

Is this what you mean?

https://github.com/lardemua/atom/blob/92dddeafb12d22be7aa606bcba42085d24cd21d7/atom_core/src/atom_core/atom.py#L70-L79

I don't know... What I do is:

Now I have to:

Can I do that with getTransform()?

miguelriemoliveira commented 3 years ago

Hi @aaguiar96,

I will call you in 10-15 mins.

aaguiar96 commented 3 years ago

Ok, thanks! :)

eupedrosa commented 3 years ago

So, what's the news after the phone call?

eupedrosa commented 3 years ago

By the way @aaguiar96, can you post the command (and its parameters) you used for icp_stereo_to_lidar_calib.py? It requires the frame names and I do not know them.

aaguiar96 commented 3 years ago

So, what's the news after the phone call?

Hi @eupedrosa. So, about the ICP, the conclusion for the bad performance is that the point clouds are very different: one (the ZED's) is very dense and imprecise, and the other (the Velodyne's) is sparse and precise. Both CloudCompare and PCL give the same result. In any case, @miguelriemoliveira will try to tune CloudCompare's parameters to see if we can improve the performance. Also, I'll add the other methods I referenced before to the agrob paper! About the second topic (recovery of the chain of transformations), I'll try to recover the transformations marked to be calibrated using the sensor-to-sensor result from OpenCV/ICP/the ZED's factory calibration.

By the way @aaguiar96, can you post the command (and its parameters) you used for icp_stereo_to_lidar_calib.py? It requires the frame names and I do not know them.

Yes

rosrun atom_evaluation icp_stereo_to_lidar_calib.py \
    -json "/home/andreaguiar/Documents/datasets/train_dataset/data_collected.json" \
    -sf zed_left_camera_frame -tf velodyne -rf base_link \
    -tpcl "/home/andreaguiar/Documents/datasets/train_dataset/vlp16_0.pcd" \
    -spcl "/home/andreaguiar/Documents/datasets/train_dataset/zed_point_cloud_0.pcd"

I posted the training and test datasets here before; you can use them.

miguelriemoliveira commented 3 years ago

Hi @aaguiar96,

I think I got a bit better results, take a look:

https://youtu.be/kYYHYsDGelY

we can discuss tomorrow ...

aaguiar96 commented 3 years ago

That looks much better!

Thanks, I'll call you tomorrow then.

aaguiar96 commented 3 years ago

Hello @miguelriemoliveira and @eupedrosa

Can somebody help me perform a slerp of many quaternions? I am trying to use scipy's Slerp but I do not get what the times parameter means...

miguelriemoliveira commented 3 years ago

Hi

I can try tonight.

I miss a good challenge :)


aaguiar96 commented 3 years ago

Thanks @miguelriemoliveira :)

eupedrosa commented 3 years ago

I am trying to use scipy's Slerp but I do not get what the times parameter means...

The times parameter defines the interpolation point, like in a linear interpolation. You have A and B and you interpolate between [0, 1].

The idea for SLERP is the same. You provide rotations A and B, and get an interpolation by providing a value between 0 and 1. However, I believe that scipy offers an API that is more generic: you provide a rotation and its associated point in time. Why time? Because the most common usage for SLERP is to smooth rotation animation over time.
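A minimal sketch of how times works (the rotations and times here are made up for illustration):

from scipy.spatial.transform import Rotation as R, Slerp

# Two key rotations with their associated "times", 0 and 1.
key_rots = R.from_euler('z', [0, 90], degrees=True)
slerp = Slerp([0, 1], key_rots)

# Asking for time 0.5 gives the rotation halfway between them.
print(slerp([0.5]).as_euler('z', degrees=True))  # -> [[45.]]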

aaguiar96 commented 3 years ago

The times parameter defines the interpolation point, like in a linear interpolation. You have A and B and you interpolate between [0, 1].

The idea for SLERP is the same. You provide rotations A and B, and get an interpolation by providing a value between 0 and 1. However, I believe that scipy offers an API that is more generic: you provide a rotation and its associated point in time. Why time? Because the most common usage for SLERP is to smooth rotation animation over time.

Hi @eupedrosa, thanks. After reading a little bit more, I interpreted it the same way. I implemented some code, but I'm not sure it's working.

I'm waiting for @miguelriemoliveira's solution to compare.

eupedrosa commented 3 years ago

Can you post your solution?

miguelriemoliveira commented 3 years ago

Hi @aaguiar96,

this is what we did a couple of years ago.

https://github.com/lardemua/Optimization/blob/478e877e1d380550075839c3418af371fafa98b4/OpenConstructorOptimization/costFunctions.py#L15-L54

The transformations are represented a bit differently, each defined by a tuple ( (tx, ty, tz), (qw, qx, qy, qz) ), and the function receives as input a list of those transformations.

The slerp is then done incrementally. Take a look and tell me if it makes sense to you.

It also computes the average of the translation as a normal average of x, y and z values.

If you like this one we could copy it into our atom core functions.
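For reference, the incremental scheme could be sketched with scipy along these lines (the function and variable names are mine, not the ones in the linked file):

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def average_transforms(transforms):
    # transforms: list of ((tx, ty, tz), (qw, qx, qy, qz)) tuples.
    avg_t = np.mean([t for t, _ in transforms], axis=0)

    def as_rotation(q):  # scipy expects (qx, qy, qz, qw) ordering
        qw, qx, qy, qz = q
        return Rotation.from_quat([qx, qy, qz, qw])

    avg_r = as_rotation(transforms[0][1])
    for k, (_, q) in enumerate(transforms[1:], start=2):
        # Slerp from the running average towards the new rotation by 1/k.
        keys = Rotation.from_quat(np.vstack([avg_r.as_quat(),
                                             as_rotation(q).as_quat()]))
        avg_r = Slerp([0, 1], keys)([1.0 / k])[0]

    return avg_t, avg_r.as_quat()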

aaguiar96 commented 3 years ago

Can you post your solution?

Is something like this:

from scipy.spatial.transform import Rotation as R, Slerp

key_times = range(0, N)
quats = [quat1, quat2, ..., quatN]
key_rots = R.from_quat(quats)
slerp = Slerp(key_times, key_rots)

final_quat = slerp([N / 2]).as_quat()

What do you think? My idea was to interpolate at the midpoint, but I don't know if it is correct.

aaguiar96 commented 3 years ago

Hi @aaguiar96,

this is what we did a couple of years ago.

https://github.com/lardemua/Optimization/blob/478e877e1d380550075839c3418af371fafa98b4/OpenConstructorOptimization/costFunctions.py#L15-L54

The transformations are represented a bit differently, each defined by a tuple ( (tx, ty, tz), (qw, qx, qy, qz) ), and the function receives as input a list of those transformations.

The slerp is then done incrementally. Take a look and tell me if it makes sense to you.

It also computes the average of the translation as a normal average of x, y and z values.

If you like this one we could copy it into our atom core functions.

Hi @miguelriemoliveira, as long as it works it's awesome! I'll copy it to atom_core and test it then!

Thanks

miguelriemoliveira commented 3 years ago

Hi @aaguiar96,

Hi @miguelriemoliveira, as long as it works it's awesome!

it did work for an M.Sc. thesis, so it should work.

What do you think? My idea was to interpolate on the medium point, but I don't know if it is correct.

I don't know if you can do that. What does the function key_rots = R.from_quat(quats) do exactly?

aaguiar96 commented 3 years ago

it did work for an M.Sc. thesis, so it should work.

Great, I'll use it then.

I don't know if you can do that. What does the function key_rots = R.from_quat(quats) do exactly?

It creates an instance of the class Rotation to use in the slerp function.

miguelriemoliveira commented 3 years ago

I don't know if you can do that. What does the function key_rots = R.from_quat(quats) do exactly?

It creates an instance of the class Rotation to use in the slerp function.

Yes, but what I do not understand is how this is done with a list of quaternions as input, rather than a single quaternion.

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira and @eupedrosa

@miguelriemoliveira's approach worked. But now I'm stuck at ICP...

So, I have a script that reads the transformations obtained in CloudCompare. I apply the transformations to the source point cloud and visualize in RViz that it aligns with the target point cloud.

However, when I input this transformation to the getAtomicTfFromCalibration function, the output is strange... I suspect that this is because of an error in the links, since it is unclear whether the ZED point cloud is in the optical frame or not.

I tested the getAtomicTfFromCalibration function one more time with the sensor-to-sensor transformation extracted from the json, and it works. As expected, it outputs the transformation also present in the json.

Any ideas? Do you want me to commit the code and give you the transformations obtained from ICP for you to test? I have no ideas right now...

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira

@eupedrosa helped me solve the problem... The problem was that I did not pay attention in CloudCompare, so I was manually aligning sensor A to sensor B, and then performing the ICP from sensor B to sensor A... So, the solution was M = np.dot(m2, np.linalg.inv(m1)).

This one was tricky. Thanks, guys for the patience! :)

Now I'm able to generate the results for ICP!
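In matrix terms, the fix looks like this (m1 and m2 are the 4x4 homogeneous matrices taken from CloudCompare; the file names are made up):

import numpy as np

# m1: manual (coarse) alignment applied in CloudCompare,
# m2: ICP refinement estimated afterwards in the opposite direction.
m1 = np.loadtxt('manual_alignment.txt')
m2 = np.loadtxt('icp_refinement.txt')

# Undo the manual pre-alignment so that M is the full
# sensor-to-sensor transformation.
M = np.dot(m2, np.linalg.inv(m1))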

miguelriemoliveira commented 3 years ago

Great news!!!


aaguiar96 commented 3 years ago

Results for ICP generated. It was a long run!

Closing this.

miguelriemoliveira commented 3 years ago

Congratulations!
