Harry24k / adversarial-attacks-pytorch

PyTorch implementation of adversarial attacks [torchattacks].
https://adversarial-attacks-pytorch.readthedocs.io/en/latest/index.html
MIT License
1.79k stars · 337 forks

[QUESTION] About check_validity function #158

Closed WWWWWLI closed 11 months ago

WWWWWLI commented 11 months ago

❔ Any questions

May I ask what the check len(set(ids)) != 1 in the check_validity function is intended to do? There is a use case where the same attack parameters are applied to several different models, so that the generated adversarial examples can fool as many different models as possible at the same time, yielding more aggressive adversarial examples. With the len(set(ids)) != 1 restriction in place, this cannot be implemented in code, which I find confusing. Looking forward to a reply, and thanks for your contribution.
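To make the question concrete, here is a minimal sketch (not the library's actual source) of the kind of check being discussed: collect the identity of each attack's target model, and reject the ensemble when more than one distinct model appears. FakeAttack is a hypothetical stand-in for a torchattacks attack object that holds a reference to its model.

```python
class FakeAttack:
    """Hypothetical stand-in for an attack that references its target model."""
    def __init__(self, model):
        self.model = model

def check_validity(attacks):
    # Gather the id() of every attack's model; if the attacks target more
    # than one distinct model, the set has more than one element.
    ids = [id(atk.model) for atk in attacks]
    if len(set(ids)) != 1:
        raise ValueError("At least one attack references a different model.")

model_a, model_b = object(), object()
check_validity([FakeAttack(model_a), FakeAttack(model_a)])  # same model: passes
try:
    check_validity([FakeAttack(model_a), FakeAttack(model_b)])  # mixed models
except ValueError:
    rejected = True
```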

rikonaka commented 11 months ago

Hi @WWWWWLI, well, I think the purpose of this restriction is to attack the same model with different attack methods, and to compare the results to get a better perturbation, making it more aggressive for some special tasks (e.g. black-box models).

Do you mean you need a method that uses the same attack method against many different models to get a strong adversarial example? As the name of this class, MultiAttack, suggests, it is based on applying multiple attack methods to the same model; maybe you can implement a MultiModel based on it to achieve your functionality. 🤪🤪🤪
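The "multimodel" idea suggested above could be sketched, under assumptions, in plain PyTorch: instead of MultiAttack's many-attacks-one-model setup, run one attack step against the summed loss of several models, so the gradient pushes the input toward misclassification on all of them at once. The helper name multi_model_fgsm is hypothetical and not part of torchattacks; a single FGSM step stands in for whatever attack method is used.

```python
import torch
import torch.nn as nn

def multi_model_fgsm(models, images, labels, eps=8 / 255):
    """Hypothetical multi-model attack: one FGSM step on the summed loss."""
    loss_fn = nn.CrossEntropyLoss()
    images = images.clone().detach().requires_grad_(True)
    # Summing the loss over every model makes the gradient an ensemble
    # direction, which tends to transfer across all targeted models.
    total_loss = sum(loss_fn(m(images), labels) for m in models)
    grad = torch.autograd.grad(total_loss, images)[0]
    adv = images + eps * grad.sign()
    return torch.clamp(adv, 0, 1).detach()

# Toy usage: two tiny linear "models" on 4-feature inputs in [0, 1].
torch.manual_seed(0)
models = [nn.Linear(4, 3), nn.Linear(4, 3)]
x = torch.rand(2, 4)
y = torch.tensor([0, 1])
adv = multi_model_fgsm(models, x, y)
```

Extending this to the iterative attacks in the library would mean repeating the summed-loss gradient step with the usual projection back into the eps-ball.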

WWWWWLI commented 11 months ago

OK, thank you very much.