git-disl / TOG

Real-time object detection is one of the key applications of deep neural networks (DNNs) for real-world mission-critical systems. While DNN-powered object detection systems celebrate many life-enriching opportunities, they also open doors for misuse and abuse. This project presents a suite of adversarial objectness gradient attacks, coined as TOG, which can cause state-of-the-art deep object detection networks to suffer from untargeted random attacks or even targeted attacks with three types of specificity: (1) object-vanishing, (2) object-fabrication, and (3) object-mislabeling. Apart from tailoring an adversarial perturbation for each input image, we further demonstrate TOG as a universal attack, which trains a single adversarial perturbation that generalizes to unseen inputs with negligible attack time cost. We also apply TOG as an adversarial patch attack, a form of physical attack, showing its ability to optimize a visually confined patch filled with malicious patterns that deceives well-trained object detectors into misbehaving purposefully.
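At a high level, attacks of this kind follow the familiar iterative gradient recipe: compute the gradient of a detection loss (e.g., an objectness/confidence term) with respect to the input pixels, step along the sign of that gradient, and project back into a small L-infinity ball around the clean image. The sketch below illustrates only this general recipe, not the repository's actual API; `model`, `loss_fn`, and the hyperparameter values are placeholders chosen for illustration.

```python
import tensorflow as tf

def iterative_objectness_attack(model, x, loss_fn, eps=8/255., eps_iter=2/255., n_iter=10):
    """Minimal sketch of an iterative gradient attack on a detector.

    model   -- a callable detector returning raw predictions (assumed)
    x       -- input image batch, values in [0, 1] (assumed)
    loss_fn -- maps predictions to a scalar detection loss, e.g. objectness (assumed)
    """
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    x_adv = tf.identity(x)
    for _ in range(n_iter):
        with tf.GradientTape() as tape:
            tape.watch(x_adv)
            preds = model(x_adv, training=False)
            loss = loss_fn(preds)
        grad = tape.gradient(loss, x_adv)
        # Ascend (e.g., untargeted/fabrication goals) or descend (e.g., vanishing goals)
        # the objectness gradient; the sign step keeps each update within eps_iter.
        x_adv = x_adv + eps_iter * tf.sign(grad)
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = tf.clip_by_value(x_adv, x - eps, x + eps)
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)
    return x_adv
```

A universal or patch variant would reuse the same gradient signal but accumulate it into a single shared perturbation (or a spatially confined patch) across many training images, rather than crafting one perturbation per input.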

No Object Detection Metrics Computation #18

Closed: nikunjpansari closed this issue 2 years ago

nikunjpansari commented 2 years ago

Hi, I have gone through the code. I found eval_tools.py in frcnn_utils, which contains the metrics computation code. I was under the impression that these metrics are computed after executing the attack, but none of the models, with or without an attack (benign or adversarial), produce any metrics.

Can you please clarify this? The paper reports mAP values, but the computation doesn't seem to be included in the main code.

If you have any updated code, please share it!
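For context, detection mAP is typically computed per class as the area under the precision-recall curve of the ranked detections, then averaged over classes; the same routine can be run once on benign detections and once on detections from attacked inputs to quantify the attack's effect. The sketch below is a generic AP@0.5 computation under assumed data formats; it is not the repository's eval_tools.py, and `detections` and `ground_truths` are hypothetical structures.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(detections, ground_truths, iou_thresh=0.5):
    """AP for one class.

    detections    -- list of (image_id, score, box) tuples (assumed format)
    ground_truths -- dict image_id -> list of boxes for this class (assumed format)
    """
    n_gt = sum(len(v) for v in ground_truths.values())
    matched = {img: [False] * len(boxes) for img, boxes in ground_truths.items()}
    detections = sorted(detections, key=lambda d: -d[1])  # highest confidence first
    tp = np.zeros(len(detections))
    fp = np.zeros(len(detections))
    for i, (img, _, box) in enumerate(detections):
        gts = ground_truths.get(img, [])
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            ov = iou(box, gt)
            if ov > best_iou:
                best_iou, best_j = ov, j
        # A detection is a true positive if it matches an unclaimed ground truth.
        if best_iou >= iou_thresh and not matched[img][best_j]:
            tp[i], matched[img][best_j] = 1, True
        else:
            fp[i] = 1
    recall = np.cumsum(tp) / max(n_gt, 1)
    precision = np.cumsum(tp) / np.maximum(np.cumsum(tp) + np.cumsum(fp), 1e-9)
    # Average the interpolated precision over 101 recall points (COCO-style).
    ap = 0.0
    for r in np.arange(0.0, 1.01, 0.01):
        p = precision[recall >= r].max() if np.any(recall >= r) else 0.0
        ap += p / 101.0
    return ap
```

mAP would then be the mean of `average_precision` over all classes; comparing the benign and adversarial values gives the kind of mAP drop reported in the paper.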

khchow-gt commented 2 years ago

You may take a look at this issue: #10