hku-mars / mlcc

Fast and Accurate Extrinsic Calibration for Multiple LiDARs and Cameras
GNU General Public License v2.0

Results of the same data vary a lot from different initial extrinsics between camera and lidar #30

Open Jackiezhou233 opened 8 months ago

Jackiezhou233 commented 8 months ago

Devices: Livox Mid360 and a fisheye camera (182-degree FOV). I tested the single-pose method as well as the multi-pose method using different initial extrinsic values. The optimization loop looks fine, since the cloud edges keep approaching the RGB edges, and the matching results look good judging from the image (the cloud edges overlap with the RGB edges very well). However, the results still differ considerably (around 10 cm) when I slightly change the initial extrinsics (by 0~10 cm). The cloud edges and RGB edges are extracted well, since we manually made objects with clear geometric features for calibration.

I wonder what kind of scene is more suitable for this algorithm, and why the results vary so much even though the edges overlap each other well.
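
For reference, here is a minimal sketch of the sensitivity test described above: perturb the initial camera-LiDAR translation by up to 10 cm, rerun the calibration, and measure how much the optimized extrinsics drift. This is not from the mlcc codebase; `run_calibration` is a hypothetical wrapper around the calibration pipeline, assumed to return the optimized 4x4 extrinsic.

```python
import numpy as np

def perturb_translation(T_init, max_offset_m=0.10, rng=None):
    """Return a copy of the 4x4 extrinsic with a random translation offset."""
    rng = np.random.default_rng() if rng is None else rng
    T = T_init.copy()
    T[:3, 3] += rng.uniform(-max_offset_m, max_offset_m, size=3)
    return T

def translation_diff_cm(T_a, T_b):
    """Euclidean distance between the translation parts, in centimeters."""
    return np.linalg.norm(T_a[:3, 3] - T_b[:3, 3]) * 100.0

# T_init = ...  # 4x4 initial camera-LiDAR extrinsic (assumed available)
# results = [run_calibration(perturb_translation(T_init)) for _ in range(10)]
# spreads = [translation_diff_cm(results[0], T) for T in results[1:]]
# print(f"max spread: {max(spreads):.1f} cm")  # ~10 cm spread reproduces the issue
```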


lsangreg commented 8 months ago

Hi, did you accumulate your point cloud?

Jackiezhou233 commented 7 months ago

Yes, I have tested the accumulated point cloud as well as a single-scan cloud. Both results vary a lot with different initial values.
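
For completeness, a minimal sketch of the accumulation step discussed here: transform each scan into a common (e.g. first-scan) frame using per-scan LiDAR poses and concatenate the points. The variable names (`scans`, `poses`) are illustrative, not taken from mlcc.

```python
import numpy as np

def accumulate_scans(scans, poses):
    """scans: list of (N_i, 3) arrays; poses: list of 4x4 LiDAR poses in a common frame."""
    merged = []
    for pts, T in zip(scans, poses):
        pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous coordinates
        merged.append((pts_h @ T.T)[:, :3])                   # transform into the common frame
    return np.vstack(merged)
```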

lsangreg commented 7 months ago

@Jackiezhou233 Yes, I am having the same issue, particularly when I introduce errors in the rotation. With accumulation the result is much better, as the LiDAR edges make more sense, but the algorithm still tends to match edges that are not semantically correct. A parameter search also leads to better results.
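
A rough sketch of what such a parameter search could look like: seed the calibration from a small grid of initial translations and keep the result with the lowest final residual. `run_calibration` is again hypothetical and assumed to return `(T_optimized, residual)` for a given initial extrinsic.

```python
import itertools
import numpy as np

def grid_search(T_init, run_calibration, step_m=0.05, radius_m=0.10):
    """Try initial translations on a grid around T_init and keep the best result."""
    offsets = np.arange(-radius_m, radius_m + 1e-9, step_m)
    best_T, best_residual = None, float("inf")
    for dx, dy, dz in itertools.product(offsets, repeat=3):
        T_seed = T_init.copy()
        T_seed[:3, 3] += np.array([dx, dy, dz])
        T_opt, residual = run_calibration(T_seed)
        if residual < best_residual:
            best_T, best_residual = T_opt, residual
    return best_T, best_residual
```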