Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
https://adversarial-robustness-toolbox.readthedocs.io/en/latest/
MIT License

A Little Suggestion: Add noise return for UniversalPerturbation #758

Closed: dustinjoe closed this issue 3 years ago

dustinjoe commented 3 years ago

I am exploring the usage of UniversalPerturbation. Could the perturbation noise be returned as well? Because of the range clipping, the returned data minus the original data may not be the same universal noise for every sample, so it would be more convenient to also return the noise itself in order to test transferability or apply it to other samples.
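As a toy illustration of the clipping issue (a sketch with made-up values, not code from the library):

```python
import numpy as np

noise = np.array([0.3, 0.3, 0.3])        # hypothetical universal perturbation
x = np.array([[0.1, 0.5, 0.9],           # two samples with features in [0, 1]
              [0.2, 0.4, 0.6]])
x_adv = np.clip(x + noise, 0.0, 1.0)     # range clipping applied to the adversarial examples
print(x_adv - x)                         # rows differ, so the difference is no longer universal
# [[0.3 0.3 0.1]
#  [0.3 0.3 0.3]]
```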

Also, I am a little confused about the 'eps' settings: what would be an appropriate value for 'eps' in UniversalPerturbation versus the 'eps' inside the 'attacker_params' of the inner attacker such as 'deepfool' or 'fgsm'?

Thank you!

beat-buesser commented 3 years ago

Hi @dustinjoe Thank you very much for your interest in ART!

I just looked again at the implementation of UniversalPerturbation and I think the idea of your proposal might have been the reason for lines 181-183: https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/3cf890d14335a02d2fae0442d7f163d019fb4f5e/art/attacks/evasion/universal_perturbation.py#L183

If this addresses your proposal, I would suggest that we make these three attributes properties of UniversalPerturbation.
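A minimal sketch of how those attributes could be read after running the attack, assuming the three attributes set around the linked lines are `fooling_rate`, `converged`, and `noise`, and that `classifier` and `x_train` already exist:

```python
from art.attacks.evasion import UniversalPerturbation

# Sketch only: `classifier` is assumed to be a fitted ART classifier
# and `x_train` a NumPy array of input samples.
attack = UniversalPerturbation(classifier, attacker="deepfool")
x_adv = attack.generate(x_train)

# Attributes set at the end of generate() (lines 181-183 of the linked source):
print(attack.fooling_rate)  # fraction of samples whose prediction was changed
print(attack.converged)     # whether the attack reached the targeted fooling rate
print(attack.noise)         # the universal perturbation itself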

To your question on the eps parameters, based on the current implementation of UniversalPerturbation: I think the eps of UniversalPerturbation defines the magnitude of the universal perturbation/noise, whereas the eps of the attacker in attacker_params defines the magnitude of the perturbation sphere around sample + current noise in which the attack is allowed to search for a better noise in each iteration.
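A hedged sketch of how the two eps values might be set together (the "eps" key is the FGSM parameter name; the numeric values are placeholders, not recommendations, and `classifier` / `x_train` are assumed to exist):

```python
from art.attacks.evasion import UniversalPerturbation

attack = UniversalPerturbation(
    classifier,
    attacker="fgsm",
    attacker_params={"eps": 0.1},  # per-iteration search radius of the inner attack
    eps=0.3,                       # bound on the magnitude of the universal noise itself
)
x_adv = attack.generate(x_train)
```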

dustinjoe commented 3 years ago

Thanks for the fast reply! This solves my issue.