Jackiezhou233 opened 8 months ago
Hi, did you accumulate your point cloud?
Yes, I have tested with an accumulated point cloud as well as a single-scan cloud. Both results vary a lot with different initial values.
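Roughly, the accumulation step looks like this (a simplified sketch: the file names and poses are placeholders, and the poses come from our odometry output, not from this repo):

```python
# Accumulate several lidar scans into one cloud before edge extraction.
import numpy as np
import open3d as o3d

scan_paths = ["scan_000.pcd", "scan_001.pcd", "scan_002.pcd"]  # placeholder file names
poses = [np.eye(4) for _ in scan_paths]                        # 4x4 lidar poses in a common frame

accumulated = o3d.geometry.PointCloud()
for path, T in zip(scan_paths, poses):
    scan = o3d.io.read_point_cloud(path)
    scan.transform(T)       # move the scan into the common frame
    accumulated += scan     # concatenate points

# Light downsampling so duplicated points do not dominate the edge extraction.
accumulated = accumulated.voxel_down_sample(voxel_size=0.02)
o3d.io.write_point_cloud("accumulated.pcd", accumulated)
```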
@Jackiezhou233 Yes, I am having the same issue, in particular when I introduce errors in the rotation. With accumulation the result is much better, since the lidar edges make more sense, but the algorithm still tends to match edges that are not semantically correct. A parameter search also leads to better results.
Devices: Livox Mid-360 and a fisheye camera (182-degree FOV). I tested the single-pose method as well as the multi-pose method using different initial extrinsics. The optimization loop looks fine, since the cloud edges keep approaching the RGB edges, and the matching result judging from the image looks good (the cloud edges overlap with the RGB edges very well). However, the results still differ significantly (by around 10 cm) when I slightly change the initial extrinsics (by 0 to 10 cm). The cloud edges and RGB edges are extracted well, since we manually made objects with clear geometric features for calibration.
I wonder what kind of scene is more suitable for this algorithm, and why the result varies so much even though the edges overlap with each other well.
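For reference, the sensitivity test I describe above looks roughly like this (a simplified sketch: the nominal extrinsic, the noise levels, and the calibration step itself are placeholders, since the actual pipeline is invoked separately):

```python
# Perturb the initial extrinsic by a few centimetres / degrees, run the calibration
# from each perturbed guess, and compare the spread of the resulting translations.
import numpy as np
from scipy.spatial.transform import Rotation as R

rng = np.random.default_rng(0)

def perturb(T_init, trans_std_m=0.05, rot_std_deg=2.0):
    """Return a copy of T_init with small random translation and rotation noise."""
    T = T_init.copy()
    T[:3, 3] += rng.normal(0.0, trans_std_m, size=3)
    noise = R.from_euler("xyz", rng.normal(0.0, rot_std_deg, size=3), degrees=True)
    T[:3, :3] = noise.as_matrix() @ T[:3, :3]
    return T

T_init = np.eye(4)  # nominal lidar-to-camera extrinsic (placeholder)
perturbed_inits = [perturb(T_init) for _ in range(10)]

# Each perturbed guess is fed to the calibration as the initial extrinsic; `calibrated`
# stands in for the 4x4 results collected from those runs (placeholder so the sketch runs).
calibrated = perturbed_inits

translations = np.array([T[:3, 3] for T in calibrated])
print("translation spread across runs (m):", np.ptp(translations, axis=0))
```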