ika-rwth-aachen / MultiCorrupt

MultiCorrupt: A benchmark for robust multi-modal 3D object detection, evaluating LiDAR-Camera fusion models in autonomous driving. Includes diverse corruption types (e.g., misalignment, miscalibration, weather) and severity levels. Assess model performance under challenging conditions.
MIT License

Evaluation of the robustness #5

Closed · Barath19 closed this issue 4 months ago

Barath19 commented 4 months ago

Hello Till,

I have successfully created the corruption dataset for spatialmisalignment only, for all three severity levels. However, when I run the evaluation for BEVFusion (mmdetection3d), I see no difference in the results compared to the clean version. I assume the template evaluation script creates a symbolic link between the root nuScenes dataset and the corruption dataset. Is it possible to run the evaluation just for spatialmisalignment? In that case the output folder only contains the LIDAR_TOP folder, so how does the symbolic link work?

TL;DR: how can we use the generated corruption data to run the evaluation? Apparently the results for all severities are the same (i.e. NDS 0.7113).

Best, Barath

TillBeemelmanns commented 4 months ago

The template evaluation script creates a symbolic link from

multicorrupt_root="/workspace/multicorrupt/"

to

nuscenes_data_dir="/workspace/data/nuscenes"

i.e.

ln -s "$multicorrupt_root/$corruption/$severity" "$nuscenes_data_dir"

You would need to double check the paths and verify in the terminal that the symbolic link actually works. You would also need to check where SparseFusion is actually accessing the dataset, otherwise the framework will run the evaluation on the original dataset.
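
For illustration, here is a minimal sketch of the linking step plus a quick sanity check; the variable values are only examples and need to be adapted to your setup:

```bash
#!/usr/bin/env bash
# Example values only -- adapt to your own setup
multicorrupt_root="/workspace/multicorrupt"
nuscenes_data_dir="/workspace/data/nuscenes"
corruption="spatialmisalignment"
severity="1"

# Create the link only if the target path is not already taken, so an
# existing real nuScenes directory is not touched by accident
if [ ! -e "$nuscenes_data_dir" ]; then
    ln -s "$multicorrupt_root/$corruption/$severity" "$nuscenes_data_dir"
fi

# Sanity checks: where does the link point, and which sample dirs are visible?
readlink -f "$nuscenes_data_dir"
ls "$nuscenes_data_dir/samples"
```

If `readlink -f` does not resolve to the corrupted severity directory under multicorrupt, the evaluation is most likely still reading the original data, which would explain identical scores across severities.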

Also make sure that the directories maps and v1.0-trainval are copied into the respective directories, like this:

[screenshot of the expected directory structure]
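
As a hedged example (the original_nuscenes path below is hypothetical, adjust both paths to your setup), copying the two metadata directories into the corrupted split could look like this:

```bash
# Hypothetical example paths -- adjust to your setup
original_nuscenes="/workspace/data/nuscenes_original"
corrupted_split="/workspace/multicorrupt/spatialmisalignment/1"

# Copy the map tiles and the annotation tables into the corrupted split
cp -r "$original_nuscenes/maps"          "$corrupted_split/"
cp -r "$original_nuscenes/v1.0-trainval" "$corrupted_split/"
```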

Can you give me more details about your file structure? Otherwise it is difficult to help you.

Barath19 commented 4 months ago

[screenshot of my file structure]

My evaluation script points to spatialmisalignment/1/.

I have copied the maps and v1.0-trainval directories. In addition, I had to add the .pkl file as well, because I am using mmdetection3d (BEVFusion LiDAR-camera) for evaluation. However, I run into the following issue:

[screenshot of the error message]

This is because, when MultiCorrupt creates the data for spatial misalignment, it only generates LIDAR_TOP and not the other sensor folders. Let me know if I am making a mistake or if it is simply not possible to run BEVFusion LiDAR-camera on the spatialmisalignment dataset.

TillBeemelmanns commented 4 months ago

The corrupted dataset of severity 1 needs to have the same structure as the original nuScenes dataset; only the corrupted files are replaced. So apparently CAM_FRONT and the other camera directories are missing in your setup. The structure needs to look like the following:

1
|-- maps
|   |-- 36092f0b03a857c6a3403e25b4b7aab3.png
|   |-- 37819e65e09e5547b8a3ceaefba56bb2.png
|   |-- 53992ee3023e5494b90c316c183be829.png
|   `-- 93406b464a165eaba6d9de76ca09f5da.png
|-- samples
|   |-- CAM_BACK
|   |-- CAM_BACK_LEFT
|   |-- CAM_BACK_RIGHT
|   |-- CAM_FRONT
|   |-- CAM_FRONT_LEFT
|   |-- CAM_FRONT_RIGHT
|   `-- LIDAR_TOP
|-- sweeps
|   `-- LIDAR_TOP
`-- v1.0-trainval
    |-- attribute.json
    .
    .
    .

You can simply copy and paste the missing directories from the original dataset there, or create a soft link. Then it should work.
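
As a rough sketch of the soft-link variant (again with hypothetical original_nuscenes and corrupted_split paths), the missing camera directories could be linked like this:

```bash
# Hypothetical example paths -- adjust to your setup
original_nuscenes="/workspace/data/nuscenes_original"
corrupted_split="/workspace/multicorrupt/spatialmisalignment/1"

# Soft-link every camera directory that the corruption tool did not generate
for cam in CAM_BACK CAM_BACK_LEFT CAM_BACK_RIGHT CAM_FRONT CAM_FRONT_LEFT CAM_FRONT_RIGHT; do
    ln -s "$original_nuscenes/samples/$cam" "$corrupted_split/samples/$cam"
done
```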

I will check the code and see how I can automate this, but I will probably only have time for that next month at the earliest.

Barath19 commented 4 months ago

Oh cool, sorted! I guess this was missing in my understanding of the evaluation script. Thanks a lot for the feedback. I am working on the same thing. If you can update the contribution guidelines, I can create a pull request when it is completed.