I really appreciate your implementation.
As shown in the notebook examples, I successfully wrote attack code using art.attacks.evasion.ZooAttack for five different tree-based classifiers:
DecisionTreeClassifier, GradientBoostingClassifier, RandomForestClassifier, and AdaBoostClassifier from sklearn, plus XGBoost. It works well!
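For reference, here is a minimal sketch of the kind of setup I used, following the scikit-learn ZOO notebook examples. The dataset, parameter values, and ART 1.x import paths are illustrative assumptions, not my exact code:

```python
# Minimal sketch: ZOO against a tree ensemble (illustrative values only).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import ZooAttack

x, y = load_iris(return_X_y=True)
x = x.astype(np.float32)
model = RandomForestClassifier(n_estimators=100).fit(x, y)

# Wrap the fitted sklearn model so ART can query its class probabilities.
# clip_values is set to the observed feature range (an assumption for iris).
classifier = SklearnClassifier(model=model, clip_values=(float(x.min()), float(x.max())))

# ZOO is a black-box, score-based attack, so it needs no gradients from the
# tree ensemble. Settings follow the scikit-learn notebook examples.
attack = ZooAttack(
    classifier=classifier,
    max_iter=20,
    nb_parallel=1,      # small tabular input: update one coordinate at a time
    use_resize=False,   # resizing and importance sampling are image-specific
    use_importance=False,
    variable_h=0.2,
)
x_adv = attack.generate(x=x)
print("Accuracy on adversarial examples:", model.score(x_adv, y))
```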
Now I'm trying to run other attacks such as FGSM, Papernot's attack, and Kantchelian’s attack.
I think this library has an FGSM attack, but it's based on BaseEstimator. How can I use it with the above classifiers?
art.attacks.evasion.DecisionTreeAttack seems to be Papernot's attack. However, I get an error when I try to apply it to any classifier other than DecisionTreeClassifier.
Is there no implementation of Kantchelian’s attack?
Hi @hin1115, thank you very much for your appreciation of ART!
FGSM is an attack that requires loss gradients, usually calculated by backpropagating the loss to the model input. It is not possible to backpropagate gradients through a decision-tree-based model. Therefore, decision-tree-based models are not compatible with attacks that require loss gradients.
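To illustrate the requirement, here is a minimal sketch (illustrative only; it assumes recent ART 1.x import paths) of FGSM applied to a scikit-learn logistic regression, a model for which ART can compute loss gradients. The same call on a tree-based wrapper fails because no loss gradient is defined:

```python
# Minimal sketch: FGSM needs a differentiable model (illustrative values only).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

x, y = load_iris(return_X_y=True)
x = x.astype(np.float32)
model = LogisticRegression(max_iter=1000).fit(x, y)

# For logistic regression ART can compute the loss gradient analytically,
# which is what FGSM perturbs along. A tree-based wrapper provides no such
# gradient, so FGSM cannot be applied to it.
classifier = SklearnClassifier(model=model, clip_values=(float(x.min()), float(x.max())))

attack = FastGradientMethod(estimator=classifier, eps=0.2)
x_adv = attack.generate(x=x)
print("Accuracy on adversarial examples:", model.score(x_adv, y))
```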
Yes, art.attacks.evasion.DecisionTreeAttack is Papernot's attack, and by definition it only works with a single decision tree.
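A minimal sketch of the supported single-tree case (illustrative only; again assuming recent ART 1.x import paths) looks like this:

```python
# Minimal sketch: Papernot's attack on a single decision tree (illustrative).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import DecisionTreeAttack

x, y = load_iris(return_X_y=True)
x = x.astype(np.float32)
model = DecisionTreeClassifier().fit(x, y)

# Wrapping a DecisionTreeClassifier yields ART's single-tree wrapper, the only
# estimator type DecisionTreeAttack accepts; wrapping a forest or boosting
# model instead leads to the error you observed.
classifier = SklearnClassifier(model=model)

attack = DecisionTreeAttack(classifier=classifier)
x_adv = attack.generate(x=x)
print("Accuracy on adversarial examples:", model.score(x_adv, y))
```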
I was not aware of Kantchelian’s attack. I think we could add it to ART. Would you be interested in trying to implement it?