AI-secure / VeriGauge

A unified toolbox for running major robustness verification approaches for DNNs. [S&P 2023]

robustness comparison for undefended classifiers #1

Closed kartikgupta-at-anu closed 4 years ago

kartikgupta-at-anu commented 4 years ago

Can you suggest whether any of these certification methods can be used to compare the robustness of two undefended networks (e.g., ResNets not trained on smoothed/noisy images), ideally without backpropagating through the network (treating it as a black box), to avoid suffering from gradient masking?

llylly commented 4 years ago

In my experience, for undefended small models, complete verification approaches such as the MILP verifier and AI2 are good choices, because their tight bounds give the exact distance to the closest adversarial examples. For undefended large models, these certification tools may not be useful: complete verification approaches cannot finish in a reasonable time, and incomplete verification approaches are too conservative to serve as comparison criteria. In that situation, I would suggest a heuristic such as the CLEVER score (https://arxiv.org/pdf/1801.10578.pdf).
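For intuition, here is a minimal, simplified sketch of the CLEVER-style idea: estimate a local Lipschitz constant of the class margin by sampling gradient norms in a ball around the input, then divide the margin by it. This toy uses a hypothetical 2-class linear model and finite-difference gradients (so the model is queried as a black box); the real CLEVER additionally fits a reverse-Weibull distribution to batch maxima via extreme value theory, which this sketch skips.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class linear logits f(x) = W x + b, standing in for a real network.
W = np.array([[1.0, -0.5], [-0.3, 0.8]])
b = np.array([0.1, -0.1])

def logits(x):
    return W @ x + b

def margin(x, c, j):
    """g(x) = f_c(x) - f_j(x); an adversarial example requires g(x) <= 0."""
    z = logits(x)
    return z[c] - z[j]

def grad_margin(x, c, j, eps=1e-5):
    """Central finite-difference gradient of the margin (black-box queries only)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (margin(x + e, c, j) - margin(x - e, c, j)) / (2 * eps)
    return g

def clever_like_score(x, c, j, radius=0.5, n_samples=500):
    """Simplified CLEVER-style estimate: margin / max sampled gradient norm.
    Larger score = larger estimated L2 distance to the decision boundary."""
    lipschitz_max = 0.0
    for _ in range(n_samples):
        delta = rng.uniform(-radius, radius, size=x.shape)
        gnorm = np.linalg.norm(grad_margin(x + delta, c, j), ord=2)
        lipschitz_max = max(lipschitz_max, gnorm)
    return margin(x, c, j) / lipschitz_max

x0 = np.array([1.0, 0.5])
print(clever_like_score(x0, c=0, j=1))
```

Because the toy model is linear, the gradient is constant and the score reduces to margin / ||W_c - W_j||_2; for a real network the sampled maximum only estimates the local Lipschitz constant, which is why the score is a heuristic rather than a certificate.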

kartikgupta-at-anu commented 4 years ago

Thanks for the prompt response and suggestion. I think CLEVER, being dependent on the input gradients of the network, suffers from gradient masking issues. Correct me if I'm wrong.

llylly commented 4 years ago

Yes, you are right: CLEVER depends heavily on the input gradients. But undefended classifiers typically do not employ gradient obfuscation or masking, so I guess it should be fine.