Robotics-BUT / Brno-Urban-Dataset-Tools

This repository contains the basic tools to process the Brno Urban Dataset raw data.

Lidar to Camera Projection Example #4

Open · jgrnt opened this issue 4 years ago

jgrnt commented 4 years ago

I am currently trying to use the lidar data together with the cameras, but I got stuck figuring out the right transformations.

@adamek727 mentioned a lidar-to-camera projection example in #1. Do you still plan to release it? That would be super helpful.

Thank you

adamek727 commented 4 years ago

Hi, thanks for your interest, and sorry for the delay. We have run into some complications with the public license for software created at the university, but I hope we will be able to release it in a few weeks.

About the calibrations, please take a look at https://github.com/Robotics-BUT/Brno-Urban-Dataset-Calibrations/tree/7d859584c7d8a0c2007285a1da04e1771b824544. There you can find the calibration parameters for each camera, and the frames.yaml file contains the translations and rotations of all the sensors w.r.t. the IMU unit. So if you want to transform a point from the lidar frame into the camera frame, you have to apply the inverse lidar tf and the forward camera tf. Then you can use the common OpenCV method for projecting 3D points onto the 2D image plane.

Please let me know if you run into any problems.
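
A minimal sketch of that pipeline in Python with numpy and OpenCV (the YAML layout, the "camera_left" key, and the file names below are illustrative assumptions, not the exact repository format):

```python
import cv2
import numpy as np
import yaml

def quat_to_mat(q):
    # Unit quaternion, assumed [x, y, z, w] order -> 3x3 rotation matrix.
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

def frame_tf(entry):
    # Build a 4x4 homogeneous transform from a frames.yaml entry.
    T = np.eye(4)
    T[:3, :3] = quat_to_mat(entry["rot"])
    T[:3, 3] = entry["trans"]
    return T

with open("frames.yaml") as f:
    frames = yaml.safe_load(f)

T_lidar = frame_tf(frames["lidar_left"])   # lidar tf w.r.t. the IMU
T_cam = frame_tf(frames["camera_left"])    # camera tf w.r.t. the IMU (key name assumed)

# Inverse lidar tf followed by forward camera tf, as described above.
T_lidar_to_cam = T_cam @ np.linalg.inv(T_lidar)

# Project lidar points (N x 3) into the image.
pts = np.random.rand(100, 3)                     # placeholder point cloud
pts_h = np.hstack([pts, np.ones((len(pts), 1))])
pts_cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]
pts_cam = pts_cam[pts_cam[:, 2] > 0]             # keep points in front of the camera

K = np.eye(3)        # placeholder; intrinsic matrix from the per-camera calibration file
dist = np.zeros(5)   # placeholder; distortion coefficients [k1, k2, p1, p2, k3]
uv, _ = cv2.projectPoints(pts_cam, np.zeros(3), np.zeros(3), K, dist)
```

Note that if frames.yaml stores sensor-to-IMU rather than IMU-to-sensor transforms, the inverse moves to the camera side, so it is worth sanity-checking the result on a frame with a clearly visible object.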

hitdshu commented 3 years ago

Hi, thanks for your excellent project. I could not figure out the intrinsic parameters of your camera calibration. Are the distortion coefficients k1/k2/p1/p2/k3 in OpenCV format?

Thanks, Deshun

adamek727 commented 3 years ago

Yes, as you mentioned, it is the common OpenCV format.
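
For reference, coefficients in that order plug straight into OpenCV's undistortion call (the numbers below are placeholders; take the real values from the per-camera calibration files):

```python
import cv2
import numpy as np

# Placeholder intrinsics; use the values from the calibration repository.
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 600.0],
              [   0.0,    0.0,   1.0]])
dist = np.array([-0.1, 0.05, 0.001, 0.001, -0.01])  # [k1, k2, p1, p2, k3]

img = cv2.imread("frame.png")
undistorted = cv2.undistort(img, K, dist)
```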

hitdshu commented 3 years ago

Thanks for your timely reply and kindness.

qingzi02010 commented 3 years ago

Hello, and thanks for your great work! I have several questions about fusing lidar and image data.

  1. In frames.yaml, lidar_left has trans: [0.17, 0.48, 0.15] and rot: [0.0012719, -0.0632113, -0.9977974, 0.0200754]. What do the values in "rot" mean: θ, sin(θ), or something else? And what does the fourth value "w" in "rot" mean?
  2. I am a beginner in this area. Can you recommend some resources on fusing lidar and camera data with the parameter style provided in this project? Should I use a ROS package or python-opencv?

adamek727 commented 3 years ago

Hi, thanks for your interest. You are welcome.

  1. The rotation is expressed as a quaternion, which is a kind of "3D complex number". Compared to rotation matrices and Euler angles, quaternions do not have the singularity problem. To see how it works, please watch https://www.youtube.com/watch?v=zjMuIxRvygQ or play with https://quaternions.online/. A worked conversion of the quaternion you quoted is sketched below.
  2. As a first step, I would recommend understanding how lidar points are projected into the camera: http://www.cse.psu.edu/~rtc12/CSE486/lecture12.pdf.
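
For instance, a quick way to inspect that lidar_left quaternion is to convert it with SciPy (a sketch; the [x, y, z, w] component order is assumed, which is SciPy's scalar-last convention):

```python
from scipy.spatial.transform import Rotation as R

# rot values quoted above for lidar_left; component order assumed [x, y, z, w]
q = [0.0012719, -0.0632113, -0.9977974, 0.0200754]
rot = R.from_quat(q)

print(rot.as_matrix())                    # 3x3 rotation matrix
print(rot.as_euler("xyz", degrees=True))  # the same rotation as Euler angles
```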

qingzi02010 commented 3 years ago

Got it! Thanks for your reply and kindness!

fferflo commented 2 years ago

Thanks for the great work!

Do you have an update on the lidar-to-image projection script?

adamek727 commented 2 years ago

Hi! Sorry, we currently have no implementation in Python, but we handled this topic and many others in a related project. It is a C++ project, but you can at least see how the math is used: https://github.com/Robotics-BUT/Atlas-Fusion