Open · billbradley opened this issue 1 year ago
PS: I meant to mention that there seem to be some hints of this problem in the past, to wit: https://github.com/Trusted-AI/adversarial-robustness-toolbox/discussions/1339
PPS: This issue should certainly have the "bug" label, but I didn't see how to add that; if anyone could add it for me, I'd be grateful.
Are these issues actively monitored? I'd be happy to improve my bug report to make it more helpful, but I'm not sure what to change.
Hi @billbradley, yes, the issues are actively monitored. We have not yet had time to take a closer look at it. Did you see any cause for the negative values in x?
No, I didn't understand where the negative values came from. Honestly, I found it pretty surprising.
Describe the bug
The Wasserstein attack produces NaNs in the output.
To Reproduce
I believe I've produced a minimal example of the issue. You can run it on Google's Colab here: https://drive.google.com/file/d/1GoikJzRJAdJjnAb1j2SB8Tu453ZxIAsi/view?usp=sharing
The code includes both a Fast Gradient Method attack and the Wasserstein attack; the FGM attack runs fine and hopefully establishes that there aren't any errors in the input processing.
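(The notebook itself is only linked above; purely as an illustration of the overall shape of such a repro, a sketch is given below. The model, data, and attack parameters are placeholders chosen for brevity, not the exact ones from the notebook.)

```python
# Hypothetical sketch of a minimal repro, NOT the author's notebook:
# a tiny (untrained) PyTorch classifier wrapped by ART, attacked first
# with FGM (reported to behave fine) and then with Wasserstein
# (reported to yield NaNs).
import numpy as np
import torch
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod, Wasserstein

# Placeholder model and data; any MNIST-shaped classifier should do.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)
x = np.random.rand(4, 1, 28, 28).astype(np.float32)

# FGM: output is finite, as expected.
x_fgm = FastGradientMethod(estimator=classifier, eps=0.1).generate(x=x)
print("FGM NaNs:", np.isnan(x_fgm).any())

# Wasserstein: this is where the NaNs (and the log warnings) show up.
x_wass = Wasserstein(estimator=classifier, max_iter=5).generate(x=x)
print("Wasserstein NaNs:", np.isnan(x_wass).any())
```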
Note that running the code produces the warnings:
In the current ART code, we have:
If we replace that with:
then the warnings disappear and the output is finite (i.e., no NaNs). However, I don't know what I'm doing in terms of the algorithmics or numerical analysis, so I wasn't comfortable making that switch myself.
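(The two snippets referred to above aren't quoted here. Purely as an illustration of the kind of change being described — guarding a logarithm so that zero or negative arguments cannot produce -inf or NaN — a sketch might look like the following; `x` and `EPS` are hypothetical placeholders, not the actual variables in the ART source.)

```python
import numpy as np

EPS = 1e-16  # hypothetical small constant, not a value taken from the ART source

# Toy array containing the kind of zero/negative entries that trigger the warnings.
x = np.array([0.3, 0.0, -1e-9])

# Unguarded log: raises "divide by zero" / "invalid value" RuntimeWarnings
# and yields -inf and nan entries.
unguarded = np.log(x)

# Guarded variant: clamp the argument away from zero before taking the log,
# so every output stays finite.
guarded = np.log(np.clip(x, EPS, None))

print(unguarded)  # contains -inf and nan
print(guarded)    # all finite
```

Whether the right fix is to clip, add a small epsilon, or handle the negative entries upstream is exactly the algorithmic question flagged above.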
For completeness, I'm also including a Python script version of the Jupyter notebook:
Expected behavior
Given a non-pathological input image, I would expect the Wasserstein attack to produce non-NaN output.
Screenshots
(None.)
System information
I replicated the problem on Google's Colab, which is presumably running Linux, but here are my own system details: