EdisonLeeeee / GreatX

A graph reliability toolbox based on PyTorch and PyTorch Geometric (PyG).
MIT License

Benchmark Results of Attack Performance #3

Closed ziqi-zhang closed 1 year ago

ziqi-zhang commented 1 year ago

Hi, thanks for sharing the awesome repo with us! I recently ran the attack sample code pgd_attack.py and random_attack.py under examples/attack/untargeted, but the accuracies under both the evasion and poisoning attacks barely decrease.

I'm pretty confused by these attack results. For CV models, a PGD attack easily drives accuracy down to nearly random guessing, but the GreatX results don't seem consistent with that. Is it because the number of perturbed edges is too small?
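For reference, the CV-style PGD being compared against can be sketched in a few lines. This is a minimal NumPy toy on a logistic-regression "model" (all names, values, and step sizes here are illustrative, not GreatX's API): ascend the loss by signed gradient steps, projecting back into an L∞ ball of radius eps around the clean input.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.1, steps=10):
    """L-infinity PGD on a toy logistic model p = sigmoid(x @ w + b)."""
    x_adv = x.copy()
    for _ in range(steps):
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))           # sigmoid
        grad = (p - y) * w                     # d(BCE loss)/dx
        x_adv = x_adv + alpha * np.sign(grad)  # signed ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

# Toy usage: a clean input classified positive gets flipped negative.
w, b = np.array([2.0, -1.0]), 0.0
x = np.array([0.5, 0.2])          # clean logit: 0.8 > 0
x_adv = pgd_attack(x, y=1.0, w=w, b=b)
print(x_adv @ w + b)              # adversarial logit, now negative
```

On images this works because every pixel is a continuous attack variable; graph structure attacks instead flip a small discrete budget of edges, which is one reason the two settings behave so differently.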

Here are the results of pgd_attack.py

Processing...
Done!
Training...
100/100 [==============================] - Total: 874.37ms - 8ms/step- loss: 0.0524 - acc: 0.996 - val_loss: 0.625 - val_acc: 0.815
Evaluating...
1/1 [==============================] - Total: 1.82ms - 1ms/step- loss: 0.597 - acc: 0.843
Before attack
 Objects in BunchDict:
╒═════════╤═══════════╕
│ Names   │   Objects │
╞═════════╪═══════════╡
│ loss    │  0.59718  │
├─────────┼───────────┤
│ acc     │  0.842555 │
╘═════════╧═══════════╛
PGD training...: 100%|███████████████████████████████████████████████████████████████████████| 200/200 [00:02<00:00, 69.74it/s]
Bernoulli sampling...: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 804.86it/s]
Evaluating...
1/1 [==============================] - Total: 2.11ms - 2ms/step- loss: 0.603 - acc: 0.842
After evasion attack
 Objects in BunchDict:
╒═════════╤═══════════╕
│ Names   │   Objects │
╞═════════╪═══════════╡
│ loss    │  0.603293 │
├─────────┼───────────┤
│ acc     │  0.842052 │
╘═════════╧═══════════╛
Training...
100/100 [==============================] - Total: 535.83ms - 5ms/step- loss: 0.124 - acc: 0.976 - val_loss: 0.728 - val_acc: 0.779
Evaluating...
1/1 [==============================] - Total: 1.74ms - 1ms/step- loss: 0.766 - acc: 0.827
After poisoning attack
 Objects in BunchDict:
╒═════════╤═══════════╕
│ Names   │   Objects │
╞═════════╪═══════════╡
│ loss    │  0.76604  │
├─────────┼───────────┤
│ acc     │  0.826962 │
╘═════════╧═══════════╛

Here are the results of random_attack.py

Training...
100/100 [==============================] - Total: 600.92ms - 6ms/step- loss: 0.0615 - acc: 0.984 - val_loss: 0.626 - val_acc: 0.811
Evaluating...
1/1 [==============================] - Total: 1.93ms - 1ms/step- loss: 0.564 - acc: 0.832
Before attack
 Objects in BunchDict:
╒═════════╤═══════════╕
│ Names   │   Objects │
╞═════════╪═══════════╡
│ loss    │  0.564449 │
├─────────┼───────────┤
│ acc     │  0.832495 │
╘═════════╧═══════════╛
Peturbing graph...: 253it [00:00, 4588.44it/s]
Evaluating...
1/1 [==============================] - Total: 2.14ms - 2ms/step- loss: 0.585 - acc: 0.826
After evasion attack
 Objects in BunchDict:
╒═════════╤═══════════╕
│ Names   │   Objects │
╞═════════╪═══════════╡
│ loss    │  0.584646 │
├─────────┼───────────┤
│ acc     │  0.826459 │
╘═════════╧═══════════╛
Training...
100/100 [==============================] - Total: 530.04ms - 5ms/step- loss: 0.0767 - acc: 0.98 - val_loss: 0.574 - val_acc: 0.791
Evaluating...
1/1 [==============================] - Total: 1.77ms - 1ms/step- loss: 0.695 - acc: 0.813
After poisoning attack
 Objects in BunchDict:
╒═════════╤═══════════╕
│ Names   │   Objects │
╞═════════╪═══════════╡
│ loss    │  0.695349 │
├─────────┼───────────┤
│ acc     │  0.81338  │
╘═════════╧═══════════╛
EdisonLeeeee commented 1 year ago

I've updated the PGD attack and you can try it again :)

ziqi-zhang commented 1 year ago

I tried the new PGD attack and the result is promising!

Training...
100/100 [====================] - Total: 534.18ms - 5ms/step- loss: 0.0835 - acc: 0.984 - val_loss: 0.586 - val_acc: 0.815
Test node clean performance

Evaluating...
1/1 [====================] - Total: 1.79ms - 1ms/step- loss: 0.626 - acc: 0.841
Before attack
 ╒═════════╤═══════════╕
│ Names   │   Objects │
╞═════════╪═══════════╡
│ loss    │  0.625767 │
├─────────┼───────────┤
│ acc     │  0.841046 │
╘═════════╧═══════════╛
PGD training...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 200/200 [00:01<00:00, 141.05it/s]
Bernoulli sampling...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 871.22it/s]
Evaluating...
1/1 [====================] - Total: 2.28ms - 2ms/step- loss: 0.983 - acc: 0.581
After evasion attack
 ╒═════════╤═══════════╕
│ Names   │   Objects │
╞═════════╪═══════════╡
│ loss    │  0.983322 │
├─────────┼───────────┤
│ acc     │  0.581489 │
╘═════════╧═══════════╛
SubhajitDuttaChowdhury commented 1 year ago

Could you please tell me which dataset you are getting this performance on?

For the Cora dataset, I am getting the following performance:

Before attack
 ╒═════════╤═══════════╕
 │ Names   │   Objects │
 ╞═════════╪═══════════╡
 │ loss    │  0.615608 │
 ├─────────┼───────────┤
 │ acc     │  0.853119 │
 ╘═════════╧═══════════╛

After evasion attack
 ╒═════════╤═══════════╕
 │ Names   │   Objects │
 ╞═════════╪═══════════╡
 │ loss    │  0.687647 │
 ├─────────┼───────────┤
 │ acc     │  0.794769 │
 ╘═════════╧═══════════╛

The accuracy drops by only ~6 points, but in your case the reduction is ~26 points.
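The two drops can be read straight off the reported accuracies above (a quick sanity check, using the numbers from the two runs in this thread):

```python
# Absolute accuracy drops under the evasion attack, from the logs above
cora_drop = 0.853119 - 0.794769   # this run:      ~0.058 (≈6 points)
other_drop = 0.841046 - 0.581489  # ziqi-zhang's:  ~0.260 (≈26 points)
print(f"{cora_drop:.3f} vs {other_drop:.3f}")
```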

EdisonLeeeee commented 1 year ago

It's Cora, with a perturbation rate of 0.2.
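The perturbation rate translates into an edge-flip budget of rate × |E|. A back-of-envelope check (5278 is the undirected edge count commonly reported for Cora in PyG; treat it as an assumption, and the exact budget depends on how the attacker counts edges):

```python
# Edge-flip budget implied by a perturbation rate of 0.2 on Cora
rate = 0.2
num_edges = 5278                 # assumed undirected edge count for Cora
budget = int(rate * num_edges)   # ~1055 edge flips
print(budget)
```

A budget this large is a strong attack; the much smaller budgets in earlier runs (e.g. 253 flips in the random_attack log) help explain the milder accuracy drops.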