I've updated the PGD attack and you can try it again :)
I tried the new PGD attack and the result is promising!
Training...
100/100 [====================] - Total: 534.18ms - 5ms/step- loss: 0.0835 - acc: 0.984 - val_loss: 0.586 - val_acc: 0.815
Test node clean performance
Evaluating...
1/1 [====================] - Total: 1.79ms - 1ms/step- loss: 0.626 - acc: 0.841
Before attack
╒═════════╤═══════════╕
│ Names │ Objects │
╞═════════╪═══════════╡
│ loss │ 0.625767 │
├─────────┼───────────┤
│ acc │ 0.841046 │
╘═════════╧═══════════╛
PGD training...: 100%|██████████| 200/200 [00:01<00:00, 141.05it/s]
Bernoulli sampling...: 100%|██████████| 20/20 [00:00<00:00, 871.22it/s]
Evaluating...
1/1 [====================] - Total: 2.28ms - 2ms/step- loss: 0.983 - acc: 0.581
After evasion attack
╒═════════╤═══════════╕
│ Names │ Objects │
╞═════════╪═══════════╡
│ loss │ 0.983322 │
├─────────┼───────────┤
│ acc │ 0.581489 │
╘═════════╧═══════════╛
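For anyone comparing numbers later: the two progress bars in the log ("PGD training..." followed by "Bernoulli sampling...") match the usual two-phase recipe for PGD topology attacks, i.e. a continuous relaxation of edge flips optimized by projected gradient ascent under a budget, followed by Bernoulli sampling of discrete flips. Below is a minimal, generic sketch of that recipe, not GreatX's actual implementation; the `damage` scores are a toy stand-in for the surrogate GNN's gradients, and all names and constants are illustrative only.

```python
import torch

torch.manual_seed(0)

num_candidates = 1000   # candidate edge flips (e.g. node pairs of the clean graph)
budget = 250            # maximum number of flipped edges (illustrative)
steps, lr = 200, 0.1    # 200 PGD steps, matching the progress bar above

# Toy stand-in for the surrogate model: a fixed per-candidate "damage" score.
# In a real attack this role is played by the gradient of the surrogate GNN's
# loss with respect to each edge-perturbation weight.
damage = torch.randn(num_candidates)

p = torch.zeros(num_candidates, requires_grad=True)  # continuous flip probabilities

def project_to_budget(p: torch.Tensor, budget: int) -> torch.Tensor:
    """Project onto {p : 0 <= p <= 1, sum(p) <= budget} by clamping and, if the
    budget is exceeded, bisecting a shift mu so that sum(clamp(p - mu, 0, 1)) == budget."""
    p = p.detach()
    if p.clamp(0, 1).sum() <= budget:
        return p.clamp(0, 1)
    lo, hi = (p - 1).min(), p.max()
    for _ in range(50):                    # bisection on the shift mu
        mu = (lo + hi) / 2
        if (p - mu).clamp(0, 1).sum() > budget:
            lo = mu
        else:
            hi = mu
    return (p - mu).clamp(0, 1)

# Phase 1 ("PGD training..."): projected gradient ascent on the attack objective
for _ in range(steps):
    attack_loss = (p * damage).sum()       # toy surrogate objective
    grad, = torch.autograd.grad(attack_loss, p)
    with torch.no_grad():
        p += lr * grad                     # ascent step
        p.copy_(project_to_budget(p, budget))

# Phase 2 ("Bernoulli sampling..."): draw discrete flips, keep the best feasible draw
best_flip = None
for _ in range(20):                        # 20 draws, matching the progress bar above
    flip = torch.bernoulli(p.detach())
    if flip.sum() <= budget and (best_flip is None
                                 or (flip * damage).sum() > (best_flip * damage).sum()):
        best_flip = flip

if best_flip is None:                      # fallback if every draw exceeded the budget
    best_flip = torch.zeros_like(p)
    best_flip[p.detach().topk(budget).indices] = 1.0

print("edges flipped:", int(best_flip.sum().item()))
```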
Could you please tell me which dataset you are getting this performance on?
For the Cora dataset, I am getting the following performance:
Before attack
╒═════════╤═══════════╕
│ Names │ Objects │
╞═════════╪═══════════╡
│ loss │ 0.615608 │
├─────────┼───────────┤
│ acc │ 0.853119 │
╘═════════╧═══════════╛
After evasion attack
╒═════════╤═══════════╕
│ Names │ Objects │
╞═════════╪═══════════╡
│ loss │ 0.687647 │
├─────────┼───────────┤
│ acc │ 0.794769 │
╘═════════╧═══════════╛
The accuracy drops by only about 6 percentage points, but in your case the reduction is ~26 points.
It's Cora, with a perturbation rate of 0.2.
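For context on what a rate of 0.2 buys the attacker: assuming the perturbation rate is interpreted as a fraction of the clean graph's edges (the usual convention, though the example's own argument handling should be checked), the edge-flip budget grows quickly with the rate. A quick back-of-the-envelope, taking the commonly cited Cora edge count as an assumption:

```python
# Assumes the rate is a fraction of the clean graph's edges; the edge count
# below is the commonly cited ~5,429 undirected edges for Cora. In your own
# setup, read the number from the data object (e.g. data.num_edges) instead.
num_edges = 5429
for ptb_rate in (0.05, 0.2):
    budget = int(ptb_rate * num_edges)
    print(f"ptb_rate={ptb_rate:>4}: about {budget} edge flips allowed")
```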
Hi, thanks for sharing the awesome repo with us! I recently ran the attack sample code `pgd_attack.py` and `random_attack.py` under `examples/attack/untargeted`, but the accuracy does not seem to decrease under either the evasion or the poison attack. I'm pretty confused by these results: for CV models, a PGD attack easily drives the accuracy down to nearly random guessing, but the GreatX results do not seem consistent with that. Is it because the number of perturbed edges is too small? (A rough way to check this is sketched after the results below.)
Here are the results of `pgd_attack.py`:
Here are the results of `random_attack.py`:
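If the accuracy really does not move, one generic sanity check (not tied to GreatX's API) is to diff the clean and attacked edge sets before evaluating, so you can see how many edges were actually flipped. The names `clean_edge_index` and `attacked_edge_index` below are hypothetical placeholders for whatever the example scripts expose:

```python
import torch

def edge_set(edge_index: torch.Tensor) -> set:
    """Treat the graph as undirected and collect unique edges as sorted node pairs."""
    src, dst = edge_index
    return {tuple(sorted(e)) for e in zip(src.tolist(), dst.tolist())}

def report_perturbation(clean_edge_index: torch.Tensor,
                        attacked_edge_index: torch.Tensor) -> None:
    """Print how many edges the attack added/removed relative to the clean graph."""
    clean, attacked = edge_set(clean_edge_index), edge_set(attacked_edge_index)
    added, removed = attacked - clean, clean - attacked
    print(f"clean edges:      {len(clean)}")
    print(f"added edges:      {len(added)}")
    print(f"removed edges:    {len(removed)}")
    print(f"changed fraction: {(len(added) + len(removed)) / max(len(clean), 1):.3f}")
```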