DSE-MSU / DeepRobust

A PyTorch adversarial library for attack and defense methods on images and graphs
MIT License

AttributeError: 'numpy.ndarray' object has no attribute 'tolil' #85

Open PolarisRisingWar opened 3 years ago

PolarisRisingWar commented 3 years ago

This is my code:

from ogb.nodeproppred import PygNodePropPredDataset
from deeprobust.graph.data import Pyg2Dpr
from deeprobust.graph.defense import GCN
from deeprobust.graph.targeted_attack import Nettack
pyg_data = PygNodePropPredDataset(name = 'ogbn-arxiv')
print(pyg_data)
data = Pyg2Dpr(pyg_data) # convert pyg to dpr
print(data)

adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
# Setup Surrogate model
surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1,
                nhid=16, dropout=0, with_relu=False, with_bias=False, device='cpu').to('cpu')
surrogate.fit(features, adj, labels, idx_train, idx_val, patience=30)
# Setup Attack Model
target_node = 0
model = Nettack(surrogate, nnodes=adj.shape[0], attack_structure=True, attack_features=True, device='cpu').to('cpu')
# Attack
model.attack(features, adj, labels, target_node, n_perturbations=5)
modified_adj = model.modified_adj # scipy sparse matrix
modified_features = model.modified_features # scipy sparse matrix

And this is the error message:

Traceback (most recent call last):
  File "/home/wanghuijuan/whj_code2/aisafety/try2.py", line 20, in <module>
    model.attack(features, adj, labels, target_node, n_perturbations=5)
  File "/home/wanghuijuan/anaconda3/envs/cuda102/lib/python3.9/site-packages/deeprobust/graph/targeted_attack/nettack.py", line 150, in attack
    self.ori_features = features.tolil()
AttributeError: 'numpy.ndarray' object has no attribute 'tolil'

I wonder how I could handle this problem?

It's hard for me to download DeepRobust's example data directly, because in mainland China downloads from the https://raw.githubusercontent.com domain often fail. So I want to use a PyG dataset and convert it to a DeepRobust dataset, but that causes this problem. My DeepRobust package was installed via pip, because installing from git ran into some strange bugs. My Python version is 3.9.7, on an Ubuntu cloud server. This is the part of my pip list that I think may be relevant to this problem:

deeprobust 0.2.2
gensim 3.8.3
networkx 2.6.3
numba 0.54.1
numpy 1.20.3
ogb 1.3.2
pandas 1.3.4
pip 21.2.4
scikit-learn 1.0.1
scipy 1.7.1
tensorboardX 2.4
torch 1.10.0
torch-geometric 2.0.1
torch-scatter 2.0.9
torch-sparse 0.6.12

ChandlerBang commented 3 years ago

Hi,

Nettack can only deal with binary, sparse feature/structure perturbations, but in ogbn-arxiv the features are continuous and dense; that is why it fails. You may want to set attack_features=False to avoid the problem (even though you may still want to attack the features).
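
With your script above, that is a one-line change in the attack setup (an untested sketch on my side; everything else stays the same):

# Structure-only attack: leave the features untouched, since the ogbn-arxiv
# features are continuous/dense and Nettack cannot perturb them.
model = Nettack(surrogate, nnodes=adj.shape[0],
                attack_structure=True, attack_features=False,
                device='cpu').to('cpu')
model.attack(features, adj, labels, target_node, n_perturbations=5)
modified_adj = model.modified_adj  # scipy sparse matrix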

Another solution is to use SGAttack instead of Nettack; see test_sga.py for more details. It does not support attack_features=True yet, but @EdisonLeeeee is working on it and will finish it in a few days.
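
Roughly, the SGAttack route looks like the sketch below, but I am writing it from memory, so please check the exact class and argument names against test_sga.py:

from deeprobust.graph.data import Dpr2Pyg
from deeprobust.graph.defense import SGC
from deeprobust.graph.targeted_attack import SGAttack
# Sketch only: SGAttack uses a trained SGC surrogate instead of GCN.
surrogate = SGC(nfeat=features.shape[1], K=2, lr=0.01,
                nclass=labels.max().item() + 1, device='cpu').to('cpu')
surrogate.fit(Dpr2Pyg(data))  # SGC is trained on a PyG-format data object
model = SGAttack(surrogate, attack_structure=True, device='cpu')
model.attack(features, adj, labels, target_node, n_perturbations=5)
modified_adj = model.modified_adj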

PolarisRisingWar commented 3 years ago

Thank you for your answer! I manually converted the feature matrix to a sparse matrix (roughly as in the snippet at the end of this comment), and the attack can run, but it is so slow that I can't even tell whether I have other problems. When I switched the dataset to Cora it worked. Honestly, I have an assignment to finish, so it seems I need to use Nettack. How could I speed up my training? For example, if a graph is too big to attack, should I just use attack_features=False? Is there any way to decide how big is too big (for example, when training takes more than 30 minutes)?

Besides, can I use a CUDA device other than cuda:0? I ran into a problem where I can't move the model to cuda:1, and I opened an issue about it. I want to run several scripts in parallel so that some datasets finish more quickly, but I hit this problem: #86
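
This is roughly the conversion I did before calling the attack (plain scipy, nothing DeepRobust-specific, so I am not sure it is the right fix):

import scipy.sparse as sp
# Wrap the dense ogbn-arxiv feature array in a scipy sparse matrix so that
# Nettack's features.tolil() call succeeds; the values stay continuous.
features = sp.csr_matrix(data.features)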

PolarisRisingWar commented 3 years ago

And I wonder: if the features are not binary and sparse, is converting them to a sparse matrix (but still not binary) useful? Or do I have to use a binary and sparse matrix?
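
Concretely, is the first option below enough, or do I need something like the second one to make the features truly binary (the zero threshold is just my guess)?

import scipy.sparse as sp
# Option 1: sparse, but the values stay continuous
sparse_features = sp.csr_matrix(data.features)
# Option 2: sparse and binarized (thresholding at zero is only my assumption)
binary_features = sp.csr_matrix((data.features > 0).astype(float))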