Using the ATOM framework to perform a full calibration of a mobile manipulator with two cameras, one on the AGV and the other on the end-effector of the manipulator, and one 3D lidar on the AGV.
Aims to determine the following transformations (see the sketch after this list):
Manipulator arm base w.r.t. the AGV.
RGB1 w.r.t. the AGV.
RGB2 w.r.t. the end-effector.
3D lidar w.r.t. the AGV.
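These poses chain together with the manipulator's forward kinematics. Below is a minimal sketch of how such homogeneous transforms compose, with made-up frame names and translations (not the actual ATOM link names or calibrated values):

```python
import numpy as np

def transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical poses (identity rotations, made-up translations in metres).
T_agv_arm_base = transform(np.eye(3), [0.30, 0.00, 0.50])  # arm base w.r.t. AGV (calibrated)
T_arm_base_ee = transform(np.eye(3), [0.00, 0.20, 0.80])   # end-effector w.r.t. arm base (forward kinematics)
T_ee_rgb2 = transform(np.eye(3), [0.05, 0.00, 0.03])       # RGB2 w.r.t. end-effector (calibrated)

# Chaining the transforms gives the eye-on-hand camera pose in the AGV frame.
T_agv_rgb2 = T_agv_arm_base @ T_arm_base_ee @ T_ee_rgb2
print(T_agv_rgb2[:3, 3])  # camera position expressed in the AGV frame
```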
The cameras used were Astra depth cameras.
The 3D lidar is a Velodyne VLP-16.
The calibration used a total of 30 collections.
The output is a URDF file with the optimized poses of the sensors.
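Because the output is a plain URDF, the optimized poses can be inspected with any XML parser. A minimal sketch, assuming the calibrated transforms live, as usual in URDF, in the <origin> tags of the joints; the file name is a placeholder:

```python
import xml.etree.ElementTree as ET

tree = ET.parse("optimized.urdf")  # placeholder file name
for joint in tree.getroot().findall("joint"):
    origin = joint.find("origin")
    if origin is not None:
        print(joint.get("name"),
              "xyz:", origin.get("xyz", "0 0 0"),
              "rpy:", origin.get("rpy", "0 0 0"))
```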
Summary:
Calibration tree and Transformations:
Videos:
Calibration Results per collection:
Results from the simulation using a noisy initial guess (nig 0.1 0.1):
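Here nig denotes the noisy initial guess: the initial sensor poses are perturbed before optimization, with the two values setting the translation and rotation noise magnitudes. A minimal sketch of such a perturbation, assuming uniform noise; this is an illustration, not ATOM's exact sampling scheme:

```python
import numpy as np

def perturb_pose(T, trans_noise=0.1, rot_noise=0.1, rng=None):
    """Apply uniform translation (m) and rotation (rad) noise to a 4x4 pose."""
    rng = rng or np.random.default_rng()
    T = T.copy()
    T[:3, 3] += rng.uniform(-trans_noise, trans_noise, size=3)
    # Random rotation about a random axis, built with Rodrigues' formula.
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = rng.uniform(-rot_noise, rot_noise)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    T[:3, :3] = R @ T[:3, :3]
    return T
```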
Ground truth frame evaluations using different datasets (error metric sketched after the list):
Lidar to AGV camera evaluation (same dataset - Atom_calibration)
Lidar to AGV camera evaluation (different dataset - Dataset_corrected vs Atom_calibration)
Eye-on-hand camera to AGV camera evaluation using different datasets (Dataset_corrected vs Atom_calibration)
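These evaluations compare the same transformation estimated from different datasets (or against ground truth). A minimal sketch of the standard error metrics, translation distance and rotation angle between two 4x4 poses; the function name is ours, not an ATOM API:

```python
import numpy as np

def pose_error(T_est, T_ref):
    """Translation (m) and rotation (rad) error between two 4x4 poses."""
    delta = np.linalg.inv(T_ref) @ T_est
    trans_err = np.linalg.norm(delta[:3, 3])
    # Rotation angle recovered from the trace of the relative rotation.
    cos_angle = (np.trace(delta[:3, :3]) - 1.0) / 2.0
    rot_err = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return trans_err, rot_err
```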