Closed zhangyikaii closed 4 years ago
Hi ZhangYikaii,
Sorry for the late reply.
It depends on what model you are using now. Some models require the input graph to be a dense matrix. If you are using Metattack/PGD/MinMax/ProGNN, changing it to sparse=True would throw an error.
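For models that expect dense input, one workaround is to keep the adjacency sparse on disk and only densify it right before passing it to the model. This is a minimal sketch (not DeepRobust's own code) of that conversion with scipy and torch:

```python
import numpy as np
import scipy.sparse as sp
import torch

# A small sparse adjacency matrix (illustrative only).
adj = sp.csr_matrix(np.array([[0, 1, 0],
                              [1, 0, 1],
                              [0, 1, 0]], dtype=np.float32))

# Models such as Metattack expect a dense tensor, so densify just before use.
adj_dense = torch.FloatTensor(np.asarray(adj.todense()))
print(adj_dense.shape)  # torch.Size([3, 3])
```

Note that for large graphs this still materializes the full n-by-n matrix in memory, which is exactly where the MemoryError comes from.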
Hi ChandlerBang, Thank you very much for your reply.
ProGNN seems to use Metattack when generating attack samples in generate_attack.py. Actually, I have tried changing the Metattack source code in several places to make it work with sparse tensors. For example, when adj is sparse, the type conversion at this line is performed repeatedly: https://github.com/DSE-MSU/DeepRobust/blob/master/deeprobust/graph/global_attack/mettack.py#L256
Is there any other way to deal with large matrices?
Anyway, thank you very much for the method proposed in your paper!
Actually, ProGNN directly loads the pre-attacked graph from the link without calling generate_attack.py.
The operation that requires a dense input graph is torch.svd(), and the graph reconstructed from the SVD is also a dense matrix. Although you could use scipy.sparse.linalg.svds to perform a sparse SVD, to reproduce the results reported in the paper I would suggest you still use a dense graph and try to find a machine with more RAM and larger GPU memory (12GB).
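For reference, here is a minimal sketch of the sparse-SVD alternative mentioned above, using scipy.sparse.linalg.svds. Note the caveat in the comments: even though the decomposition itself runs on the sparse matrix, the rank-k reconstruction is dense, so this only saves memory during the decomposition step.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

# A random symmetric sparse adjacency matrix (illustrative only).
rng = np.random.default_rng(0)
n = 100
mask = rng.random((n, n)) < 0.05
adj = sp.csr_matrix(np.triu(mask, 1).astype(np.float64))
adj = adj + adj.T

# Truncated SVD on the sparse matrix: only k singular triplets are computed,
# so the full dense matrix is never formed during the decomposition.
k = 10
u, s, vt = svds(adj, k=k)

# The rank-k reconstruction is a dense n-by-n matrix again.
adj_lowrank = (u * s) @ vt
print(adj_lowrank.shape)  # (100, 100)
```

Because the reconstruction is dense, the memory pressure returns as soon as you rebuild the graph, which is consistent with the suggestion to simply use a machine with more RAM.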
Thank you very much!
Excuse me, I am using a larger graph, and calling deeprobust.graph.utils.preprocess causes https://github.com/DSE-MSU/DeepRobust/blob/master/deeprobust/graph/utils.py#L32 to throw a MemoryError. If I set sparse to True:
adj, features, labels = preprocess(adj, features, labels, sparse=True)
does it affect the subsequent training? Thank you!
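For context, this is a rough sketch of what sparse=True buys you: a torch sparse COO tensor instead of a dense one. This is an illustration built directly on torch and scipy, not DeepRobust's actual preprocess implementation, so treat the details as assumptions:

```python
import numpy as np
import scipy.sparse as sp
import torch

# A large-ish random sparse adjacency matrix (illustrative only).
adj = sp.random(1000, 1000, density=0.005, format='coo', dtype=np.float32)

# Convert the scipy COO matrix to a torch sparse tensor: only the nonzero
# indices and values are stored, avoiding the dense n-by-n allocation.
indices = torch.LongTensor(np.vstack((adj.row, adj.col)))
values = torch.FloatTensor(adj.data)
adj_sparse = torch.sparse_coo_tensor(indices, values, adj.shape)

print(adj_sparse.is_sparse)  # True
```

Whether downstream training works with this representation depends on the model, which is exactly the point made above: Metattack/PGD/MinMax/ProGNN expect the dense form.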