plai-group / gae_in_pytorch

Graph Auto-Encoder in PyTorch

a question about class GraphConvolution in layers.py #4

Open AllenWu18 opened 5 years ago

AllenWu18 commented 5 years ago

Hi, may I ask you a simple question? In the original paper "Semi-Supervised Classification with Graph Convolutional Networks" by Thomas N. Kipf and Max Welling (ICLR 2017), the authors write that a GCN layer computes D^(-1/2) A D^(-1/2) X W, where A and D already include the self-loops (i.e. Ã = A + I and D̃ is the degree matrix of Ã). I just can't find where the operation D^(-1/2) A D^(-1/2) happens in the code. Is it in the function reset_parameters? Thanks!

vmasrani commented 5 years ago

Yup, it's here: https://github.com/vmasrani/gae_in_pytorch/blob/2639fbccb19cc1cfa17407dd36fac4917401a903/preprocessing.py#L20
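
For readers following the link, here is a minimal sketch of what that symmetric normalization with self-loops typically looks like (assuming a `scipy.sparse` adjacency matrix; the function name `normalize_adj` is illustrative, not necessarily the name used in `preprocessing.py`):

```python
import numpy as np
import scipy.sparse as sp

def normalize_adj(adj):
    """Symmetrically normalize an adjacency matrix with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, as in Kipf & Welling (2017).
    (Illustrative sketch, not necessarily the repo's exact code.)"""
    adj = sp.coo_matrix(adj)
    adj_ = adj + sp.eye(adj.shape[0])            # add self-loops: A + I
    rowsum = np.array(adj_.sum(1)).flatten()     # degrees of A + I
    d_inv_sqrt = np.power(rowsum, -0.5)
    d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.0       # guard against isolated nodes
    d_mat_inv_sqrt = sp.diags(d_inv_sqrt)
    return d_mat_inv_sqrt @ adj_ @ d_mat_inv_sqrt  # D^{-1/2} (A+I) D^{-1/2}
```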

AllenWu18 commented 5 years ago

OK, got it, thanks :) I have two other follow-up questions:

1) If we use the GAE to get an embedding for each node in the graph, should we apply an activation function such as ReLU to the reconstruction `adj_hat = torch.mm(x, x.t())`, where `x` is the output of the GCN encoder?

2) In `reset_parameters` in layers.py, other implementations do the following:

```python
def reset_parameters(self):
    stdv = 1. / math.sqrt(self.weight.size(1))
    self.weight.data.uniform_(-stdv, stdv)
    if self.bias is not None:
        self.bias.data.uniform_(-stdv, stdv)
```

(from https://github.com/sbonner0/gae_in_pytorch/blob/master/layers.py), or

```python
def reset_parameters(self):
    torch.nn.init.xavier_uniform_(self.weight)
```

(from https://github.com/zfjsail/gae-pytorch/blob/master/gae/layers.py)

Do these have a similar effect to yours? This has confused me for a long time :( Thank you very much!
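
For reference on question 1), the GAE paper of Kipf and Welling (2016) reconstructs the adjacency matrix with an inner-product decoder followed by a sigmoid, not a ReLU. A minimal sketch of that decoder, assuming `z` is the node-embedding matrix produced by the encoder:

```python
import torch

def inner_product_decoder(z: torch.Tensor) -> torch.Tensor:
    """Sketch of the GAE inner-product decoder: A_hat = sigmoid(Z Z^T)."""
    return torch.sigmoid(torch.mm(z, z.t()))
```

In practice the sigmoid is often left out of the forward pass and folded into `torch.nn.BCEWithLogitsLoss` for numerical stability.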