marcoCalipari opened this issue 1 month ago
Hi @marcoCalipari. `evaluate.py` should draw some figures about attacks and defenses under the result folder (by default `/result`). You can also look at the functions in `/mvp/visualize`. `scripts/visualize.py` is somewhat deprecated, sorry for that. Basically, `scripts/evaluate.py` does:

```python
from mvp.visualize.attack import draw_attack
from mvp.visualize.defense import visualize_defense, draw_roc
```
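If you just want a quick look at a point cloud before and after an attack, a stand-alone matplotlib sketch like the one below works without the repo's helpers. This is a hypothetical illustration, not the repo's API: the data, the injected-cluster "attack", and the file name `attack_comparison.png` are all made up for the example.

```python
# Hypothetical stand-alone sketch (NOT the repo's draw_attack API):
# plot a bird's-eye-view LiDAR point cloud before and after an attack
# that injects a small cluster of spoofed points.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Fake "benign" point cloud: 2000 points with x, y in meters.
benign = rng.uniform(-40, 40, size=(2000, 2))

# Fake "attack": inject 60 spoofed points clustered near (10, 5).
spoofed = rng.normal(loc=[10.0, 5.0], scale=0.5, size=(60, 2))
attacked = np.vstack([benign, spoofed])

fig, axes = plt.subplots(1, 2, figsize=(10, 5), sharex=True, sharey=True)
for ax, pts, title in [(axes[0], benign, "before attack"),
                       (axes[1], attacked, "after attack")]:
    ax.scatter(pts[:, 0], pts[:, 1], s=1)
    ax.set_title(title)
    ax.set_xlabel("x (m)")
axes[0].set_ylabel("y (m)")
fig.savefig("attack_comparison.png", dpi=120)
print("saved attack_comparison.png")
```

Swapping in real point clouds from the test dataset (instead of the random arrays) gives a quick before/after comparison without the full OPV2V folder structure.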
Hello!
I successfully retrieved the test dataset from UCLA Box and ran the system. I executed the evaluation script, which logs the scores for various experiments.
However, I encountered a couple of issues that I hope you can help clarify:
2) Visualizing the Results:
I attempted to use the visualize.py script for this, but it appears to be intended for the full OPV2V dataset; the test dataset doesn't have the required "train/validate/test" folder structure.
Additionally, since the provided code uses pre-trained models, I'm unsure if the full OPV2V dataset is necessary.
Given the current codebase, could you clarify the correct pipeline for visualizing the before- and after-attack images or LiDAR point clouds using the test dataset?
Any help would be greatly appreciated!
Thank you!