zqzqz / AdvCollaborativePerception

Repo for USENIX security 2024 paper "On Data Fabrication in Collaborative Vehicular Perception: Attacks and Countermeasures" https://arxiv.org/abs/2309.12955
Apache License 2.0

How to reproduce the experiments from the paper #3


marcoCalipari commented 1 month ago

Hello!

I successfully retrieved the test dataset from UCLA Box and ran the system. I executed the evaluation script, which logs the scores for various experiments.

However, I encountered a couple of issues that I hope you can help clarify:

1. Lack of visual confirmation:

2. Visualizing the results: given the current codebase, what is the correct pipeline for visualizing the before-attack and after-attack images or LiDAR point clouds using the test dataset?

Any help would be greatly appreciated!

Thank you!

zqzqz commented 1 month ago

Hi @marcoCalipari. evaluate.py should draw some figures about attacks and defenses under the result folder (by default /result). You can also look at the functions in /mvp/visualize. scripts/visualize.py is somewhat deprecated, sorry about that.

Basically, scripts/evaluate.py does:

from mvp.visualize.attack import draw_attack
from mvp.visualize.defense import visualize_defense, draw_roc
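As a side note, if the repo's plotting helpers are hard to trace, a quick repo-independent sanity check for "before vs. after attack" point clouds is a simple set difference. This is a hypothetical sketch, not part of the codebase: it assumes clouds are given as lists of (x, y, z) tuples, and `diff_point_clouds` is a made-up helper name.

```python
# Hypothetical helper, NOT from the repo: compare a benign and an attacked
# LiDAR point cloud and report which points the attack injected or removed.
# Assumes each cloud is a list of (x, y, z) tuples with exact coordinates.

def diff_point_clouds(before, after):
    """Return (injected, removed) point lists between two clouds."""
    before_set, after_set = set(before), set(after)
    injected = sorted(after_set - before_set)  # points only in the attacked cloud
    removed = sorted(before_set - after_set)   # points the attack deleted
    return injected, removed

if __name__ == "__main__":
    benign = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.5), (3.0, 1.0, 0.2)]
    attacked = [(0.0, 0.0, 0.0), (3.0, 1.0, 0.2), (5.0, 5.0, 0.3)]
    injected, removed = diff_point_clouds(benign, attacked)
    print("injected:", injected)  # → [(5.0, 5.0, 0.3)]
    print("removed:", removed)    # → [(1.0, 2.0, 0.5)]
```

The injected/removed lists can then be scatter-plotted in different colors (e.g. with matplotlib) to eyeball where fabricated points landed relative to the benign scene.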