wenboDong0917 closed this issue 3 years ago
Do we need the ALL_LiDAR_vertices folder when we use our own dataset? And how can I create a dataset in the format you provided?
Hi,
Thank you for using our repo! Regarding your title, which corners are you talking about? Are those camera corners or LiDAR vertices? For the camera corners, you need to click on the image and find the coordinates. For the LiDAR vertices, this package will optimize them for you; setting opts.optimizeAllCorners = 1 will do the job. Please also follow the instructions here when you try to calibrate your system.
Please let me know if you have other problems!
Thanks for your reply. I have tried your repo and it works. I want to know whether I need the ALL_LiDAR_vertices folder when I use my own dataset, and how I can create a dataset in the format you provided in order to calibrate my system.
Hi,
Please follow the instructions here to collect your datasets. You don't need the ALL_LiDAR_vertices folder for your own datasets at first; the software will create one and save the LiDAR vertices for you. Please let me know if you have other questions!
OK! Thanks again. I find that the .mat files have various types, and I would appreciate it if you could tell me how to create .mat files of the type 'full-pc-.mat' as well as the type 'velodyne_points-EECS3--2019-09-06-06-19.mat', because I don't know what data is in those .mat files. I have already got the .mat files like 'big/med/small/-.mat' by using bag2mat.py.
Hi,
That's great that you already used bag2mat.py to convert the data! Please take a look at getBagData.m. There are two types of data:
I) TestData is for testing/visualization and does not contain calibration targets. Take TestData(1) for example:
TestData(1).bagfile = "EECS3.bag";
--> The bagfile you collected for a testing scene.
TestData(1).pc_file = "velodyne_points-EECS3--2019-09-06-06-19.mat";
--> The full set of point clouds extracted from the bagfile using bag2mat.py.
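If it helps, such a full-scan .mat file can be sanity-checked outside MATLAB with scipy.io. The sketch below is illustrative only: the variable name stored inside the file ("point_cloud") and the N x 4 layout (x, y, z, intensity) are assumptions, not guaranteed by bag2mat.py, so inspect loadmat(...).keys() on your own file first. A synthetic scan is written with savemat so the snippet is self-contained.

```python
import numpy as np
from scipy.io import loadmat, savemat

# Build a small synthetic scan (N x 4: x, y, z, intensity -- an assumed
# layout) so the example is self-contained; in practice bag2mat.py
# produces the .mat file for you.
fake_scan = np.random.rand(1000, 4)
savemat("example_scan.mat", {"point_cloud": fake_scan})

# loadmat returns a dict; MATLAB variable names are its keys, alongside
# metadata keys such as "__header__".
data = loadmat("example_scan.mat")
pc = data["point_cloud"]
print(pc.shape)  # (1000, 4)
```

Printing `data.keys()` is the quickest way to discover which variable name your particular .mat file actually uses.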
II) BagData
is for training and validation, and does contain calibration targets. You need to know how many calibration targets are in the scene and the size of each target. For each target, we need the LiDAR returns on the target, i.e., a patch of the LiDAR point cloud lying on the target; this package will use that patch to estimate the LiDAR vertices for you. We also need the corner coordinates on the image, in top-left-right-bottom order.
P.S. The LiDAR returns can be extracted with either bag2mat.py or the LiDARTag package. If you only need to do the calibration once, bag2mat.py might be faster, but if you plan to do it many times, the LiDARTag package is recommended.
Take BagData(2) for example: it contains two calibration targets and the bagfile collected for the scene. You will have the following information:
BagData(2).bagfile = "lab2-closer.bag";
--> The bagfile you collected for a calibration/validation scene.
BagData(2).num_tag = 2;
--> How many calibration targets are in the scene.
BagData(2).lidar_full_scan = "velodyne_points-lab2-full-pc--2019-09-05-23-20.mat";
--> Full scan of the point cloud of the scene, extracted by bag2mat.py or the LiDARTag package.
BagData(2).lidar_target(1).pc_file = 'velodyne_points-lab2-closer-big--2019-09-05-21-51.mat';
--> LiDAR returns on the first calibration target. You can use the LiDARTag package to extract them.
BagData(2).lidar_target(1).tag_size = 0.8051;
--> The size of the first calibration target.
BagData(2).camera_target(1).corners = [340, 263, 406, 316; 236, 313, 341, 417; 1, 1, 1, 1];
--> Image coordinates of the four corners of the first calibration target (homogeneous coordinates; columns in top-left-right-bottom order).
BagData(2).lidar_target(2).pc_file = 'velodyne_points-lab2-closer-small--2019-09-05-21-53.mat';
--> LiDAR returns on the second calibration target. You can use the LiDARTag package to extract them.
BagData(2).lidar_target(2).tag_size = 0.158;
--> The size of the second calibration target.
BagData(2).camera_target(2).corners = [197, 153, 220, 176; 250, 273, 292, 315; 1, 1, 1, 1];
--> Image coordinates of the four corners of the second calibration target.
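For readers working outside MATLAB, the BagData(2) entry above can be mirrored as plain data. The sketch below rebuilds it as a Python dict (the dict layout is just an illustration, not part of the package) and checks the corner-matrix convention visible above: a 3 x 4 matrix whose last row is all ones (homogeneous coordinates), with one column per corner in top-left-right-bottom order.

```python
# Illustrative mirror of the BagData(2) struct from getBagData.m.
# Field names copy the MATLAB struct; the dict form itself is assumed.
bag_data_2 = {
    "bagfile": "lab2-closer.bag",
    "num_tag": 2,
    "lidar_full_scan": "velodyne_points-lab2-full-pc--2019-09-05-23-20.mat",
    "lidar_target": [
        {"pc_file": "velodyne_points-lab2-closer-big--2019-09-05-21-51.mat",
         "tag_size": 0.8051},
        {"pc_file": "velodyne_points-lab2-closer-small--2019-09-05-21-53.mat",
         "tag_size": 0.158},
    ],
    "camera_target": [
        # Rows: pixel u, pixel v, homogeneous 1.
        # Columns: top, left, right, bottom corners.
        {"corners": [[340, 263, 406, 316],
                     [236, 313, 341, 417],
                     [1, 1, 1, 1]]},
        {"corners": [[197, 153, 220, 176],
                     [250, 273, 292, 315],
                     [1, 1, 1, 1]]},
    ],
}

# Consistency checks: one LiDAR patch and one set of image corners per
# target, and each corner matrix is 3 x 4 with a homogeneous last row.
assert bag_data_2["num_tag"] == len(bag_data_2["lidar_target"])
assert bag_data_2["num_tag"] == len(bag_data_2["camera_target"])
for tgt in bag_data_2["camera_target"]:
    corners = tgt["corners"]
    assert len(corners) == 3 and all(len(row) == 4 for row in corners)
    assert corners[2] == [1, 1, 1, 1]  # homogeneous row
```

Running checks like these on your own scene descriptions catches the most common mistakes (a missing target entry or a transposed corner matrix) before the optimizer runs.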
Hi,
I am going to close this issue. Please feel free to reopen it if you encounter related issues.
I'm sorry to reply so late; I was busy with other things a few days ago. I used the bag2mat.py you provided to extract the .mat files, but the files I extracted are not exactly the same as the ones you provided. Most of them are the same and only a few differ, and I don't understand the reason. If you can help me, I would be very grateful.
Hi,
That depends on how you extracted the point cloud. If you include more points in the patch of the point cloud, the resulting file will be different.
as the title