Closed eliasyin closed 3 years ago
We use a simplified version, while the original paper uses a normalized version. But I think it is over-simplified. According to the paper, Section 3.1 (EXAMPLE):
> We first calculate $\hat{A}=\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}$ in a pre-processing step. Our forward model then takes the simple form: $Z=\mathrm{softmax}\big(\hat{A}\,\mathrm{ReLU}(\hat{A}XW^{(0)})\,W^{(1)}\big)$
Therefore, I think there should be a step that calculates $\hat{A}$. Without this step, I think the `GraphConvolution` code is not really graph convolution.
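The pre-processing step described above could be sketched like this (a minimal NumPy sketch; `normalize_adj` is my own name, not a function in the repo):

```python
import numpy as np

def normalize_adj(A):
    """Compute A_hat = D_tilde^{-1/2} A_tilde D_tilde^{-1/2}, the
    symmetric normalization from Kipf & Welling (2017)."""
    A_tilde = A + np.eye(A.shape[0])          # add self-loops: A_tilde = A + I
    d = A_tilde.sum(axis=1)                   # degree of each node
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D_tilde^{-1/2}
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalization

# Toy example: a path graph 0-1-2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
A_hat = normalize_adj(A)
print(np.allclose(A_hat, A_hat.T))  # True: A_hat stays symmetric
```

Since self-loops are added before computing degrees, every degree is at least 1, so the inverse square root is always well defined.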
It has been shown in http://tkipf.github.io/graph-convolutional-networks/ that this simplified version is already powerful. Quote: "Despite its simplicity this model is already quite powerful". And you can additionally implement the normalized version to see if the postulate is true.
Thank you. I see.
https://github.com/songyouwei/ABSA-PyTorch/blob/9acab7e62e8aa52a8eb0b4a560d39740bc0f3798/models/asgcn.py#L12-L33 This part of code is commented with
Simple GCN layer, similar to https://arxiv.org/abs/1609.02907
However, I cannot find the relation between the `GraphConvolution` here and the mentioned paper, SEMI-SUPERVISED CLASSIFICATION WITH GRAPH CONVOLUTIONAL NETWORKS, because the forward propagation formula in that paper is $$H^{(l+1)}=\sigma\big(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}H^{(l)}W^{(l)}\big)$$ while the code here just applies a simple matrix multiply.
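For comparison, the paper's normalized propagation rule could be implemented roughly as below. This is a sketch of the normalized variant only, not the layer in `asgcn.py`; `NormalizedGraphConvolution` and its parameter names are my own:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedGraphConvolution(nn.Module):
    """Hypothetical GCN layer applying the symmetric-normalized
    adjacency D_tilde^{-1/2} A_tilde D_tilde^{-1/2} inside forward().
    Not the layer from ABSA-PyTorch, which multiplies by the raw adjacency."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_features, out_features))
        nn.init.xavier_uniform_(self.weight)

    @staticmethod
    def normalize(adj):
        # adj: (batch, n, n) raw adjacency; add self-loops, then normalize
        n = adj.size(-1)
        a_tilde = adj + torch.eye(n, device=adj.device)
        d_inv_sqrt = a_tilde.sum(dim=-1).pow(-0.5)  # (batch, n)
        # D^{-1/2} A_tilde D^{-1/2} via broadcasting over rows and columns
        return a_tilde * d_inv_sqrt.unsqueeze(-1) * d_inv_sqrt.unsqueeze(-2)

    def forward(self, x, adj):
        # x: (batch, n, in_features); adj: (batch, n, n)
        a_hat = self.normalize(adj)
        return F.relu(a_hat @ x @ self.weight)

layer = NormalizedGraphConvolution(4, 8)
x = torch.randn(2, 5, 4)
adj = torch.randint(0, 2, (2, 5, 5)).float()
out = layer(x, adj)
print(out.shape)  # torch.Size([2, 5, 8])
```

Because self-loops are added before the degree computation, no node has degree zero, so `pow(-0.5)` never produces infinities even for isolated nodes.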