What would you like to report?

The term 'adversarial attack' usually has a broader definition than what ML01 intends: for example, it usually also covers data poisoning. ML01 seems to be aiming at what is more often called an 'evasion attack'. The problem with that term is that it usually implies only small changes to the input. This is why, in the AI guide, we used the term 'input manipulation', which is clearer.
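For reference, 'evasion attack' in the narrow sense typically means a small, bounded perturbation of a single input at inference time. Below is a minimal sketch of that idea using the fast gradient sign method (FGSM) in PyTorch; the function name and the epsilon budget are illustrative assumptions, not anything defined by ML01 or the AI guide:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    # Illustrative evasion attack (FGSM): nudge the input by a small,
    # bounded amount in the direction that increases the model's loss,
    # so the input barely changes but the prediction can flip.
    # 'fgsm_perturb' and epsilon=0.03 are hypothetical choices.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # y: true labels for inputs x
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # epsilon bounds the change
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in valid range
```

Nothing in this sketch touches the training data, which is exactly what separates an evasion attack from data poisoning and illustrates why lumping both under 'adversarial attack' blurs the scope of ML01.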
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Type
Suggestions for Improvement