ntu-aris / MMAUD

This package provides A Comprehensive Multi-Modal Anti-UAV Dataset
MIT License

Can you provide the relative translation and rotation relationship of each sensor during the collection of the dataset? #2

Closed lcjBegin closed 2 weeks ago

lcjBegin commented 7 months ago

The lack of known relative positional relationships between the sensors makes it difficult to fuse the multimodal data. Specifically, the translation and rotation (extrinsics) are needed between:

- the Livox Avia and the Leica Nova MS60 (ground truth)
- the lidar and the Leica Nova MS60 (ground truth)
- the fisheye cameras (left and right) and the Leica Nova MS60 (ground truth)
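Once these extrinsics are published, relating any two sensors reduces to composing SE(3) transforms through a common frame. Below is a minimal sketch of that composition; the transform values are identity-rotation placeholders, not real MMAUD calibration, and all variable names are hypothetical.

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder extrinsics; real values would come from the dataset's
# calibration files once released.
T_ms60_from_avia = se3(np.eye(3), np.array([0.10, 0.0, 0.05]))  # Livox Avia -> MS60 frame
T_ms60_from_cam = se3(np.eye(3), np.array([0.0, 0.08, 0.02]))   # left fisheye -> MS60 frame

# Relative transform between the Avia and the left fisheye camera,
# obtained by composing through the common MS60 (ground-truth) frame.
T_cam_from_avia = np.linalg.inv(T_ms60_from_cam) @ T_ms60_from_avia

# Map a lidar point (homogeneous coordinates) into the camera frame.
p_avia = np.array([1.0, 2.0, 3.0, 1.0])
p_cam = T_cam_from_avia @ p_avia
```

The key point is that only the per-sensor extrinsics to one reference frame (here the MS60) are needed; every pairwise relationship follows by composition.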

snakehaihai commented 6 months ago

MMAUD V1 (collected on a rooftop; ~250 GB of raw data, under 80 GB after compression) is already open. V2 (collected at a carpark for the CVPR challenge trial, roughly 800 GB) is being compressed. V3 (collected at a carpark for the CVPR challenge final, roughly 500 GB) is pending compression.

For V1, you may try downloading it.

This is a known issue; that's why we collected the V2 and V3 sequences with the MS60 calibrated to the rig.

For calibration, the inter-camera calibration is available here: https://drive.google.com/drive/folders/1wk-c5xVX6701WNI_In1ba3_D4LSjRYv5.

Initially, I thought this work was about using a neural network to fit the multi-sensor observations to the ground truth. Since you asked about it, I'll have to think of a solution. There is minimal FOV overlap between the lidars. The CAD file is already available via the link in the README; you can download it and use it for a raw (initial) calibration. For refinement, I'll try running MLCC from the HKU team and see whether it gives a good result. I'll post an update after ICRA.
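The workflow described above (rough extrinsic from CAD, then refinement with a tool like MLCC) can be sketched as follows. This is only an illustration of applying a CAD-derived initial guess to a point cloud; the yaw offset and lever-arm values are invented placeholders, and `apply_extrinsic` is a hypothetical helper, not part of the MMAUD tooling.

```python
import numpy as np

def apply_extrinsic(points, T):
    """Transform an (N,3) point cloud by a 4x4 homogeneous extrinsic T."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T.T)[:, :3]

# Rough extrinsic read off the CAD model (placeholder values):
# a small yaw offset plus a lever arm between the two lidars.
yaw = np.deg2rad(2.0)
T_cad = np.eye(4)
T_cad[:3, :3] = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                          [np.sin(yaw),  np.cos(yaw), 0.0],
                          [0.0,          0.0,         1.0]])
T_cad[:3, 3] = [0.15, 0.0, -0.04]

# Bring one lidar's points into the other's frame using the CAD prior;
# a refinement tool (e.g. MLCC) would then correct the residual error.
cloud = np.random.default_rng(0).uniform(-5.0, 5.0, size=(100, 3))
aligned = apply_extrinsic(cloud, T_cad)
```

With minimal FOV overlap between the lidars, a good CAD prior like this matters more than usual, since overlap-based refinement has little common geometry to work with.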