Traceback (most recent call last):
  File "/home/gfq/code/privacy/Graph-Unlearning-main/lib_gnn_model/node_classifier.py", line 203, in <module>
    graphsage.train_model()
  File "/home/gfq/code/privacy/Graph-Unlearning-main/lib_gnn_model/node_classifier.py", line 80, in train_model
    out = self.model(self.data.x[n_id], adjs, self.edge_weight)
  File "/home/gfq/anaconda3/envs/unlearning/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/gfq/code/privacy/Graph-Unlearning-main/lib_gnn_model/gcn/gcn_net_batch.py", line 20, in forward
    x = self.convs[i]((x, x_target), edge_index, edge_weight=edge_weight[e_id])
  File "/home/gfq/anaconda3/envs/unlearning/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/gfq/code/privacy/Graph-Unlearning-main/lib_gnn_model/gcn/gcn_conv_batch.py", line 22, in forward
    out = torch.matmul(out, self.weight)  # error raised here!
  File "/home/gfq/anaconda3/envs/unlearning/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'GCNConvBatch' object has no attribute 'weight'

Process finished with exit code 1
If we change `out = torch.matmul(out, self.weight)` to `out = self.lin(out)` in Graph-Unlearning-main/lib_gnn_model/gcn/gcn_conv_batch.py, training runs, but with the default CiteSeer training set the accuracy is only about 40%.
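For context, this AttributeError is typical of a PyTorch Geometric version mismatch: in newer PyG releases, `GCNConv` stores its transform as an `nn.Linear` submodule named `lin` rather than a raw `weight` parameter, so code written against the old attribute fails. A version-tolerant dispatch (a sketch of the idea only; the stand-in classes below are not the real PyG API, and `apply_transform` is a hypothetical helper) could look like this:

```python
# Dependency-free sketch of a version-tolerant fix. Assumption: the failure
# comes from PyG renaming GCNConv's raw `weight` Parameter into an nn.Linear
# submodule `lin`. The two classes below only mimic the two attribute layouts.

class OldStyleConv:
    """Older layout: raw weight attribute, applied via matmul."""
    def __init__(self):
        self.weight = 2  # stand-in for a weight matrix


class NewStyleConv:
    """Newer layout: callable linear submodule, applied via self.lin(x)."""
    def __init__(self):
        self.lin = lambda x: x * 3  # stand-in for nn.Linear


def apply_transform(conv, x):
    # Prefer the new `lin` module; fall back to the old `weight` attribute,
    # so the same forward() works under either attribute layout.
    if hasattr(conv, "lin"):
        return conv.lin(x)
    return x * conv.weight  # stand-in for torch.matmul(x, conv.weight)


print(apply_transform(OldStyleConv(), 5))  # 10
print(apply_transform(NewStyleConv(), 5))  # 15
```

Note that `self.lin(out)` is not a drop-in replacement for the matmul if the linear layer also adds a bias or uses a different weight shape/initialization, which might be one reason the accuracy drops after the change.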
Which experiments did you run? I only ran the partition experiment, but I ran into problems with the node_edge_unlearning experiment. Have you encountered this, or found a solution?