Open hkthirano opened 6 years ago
The data splits are different, the normalization of the adjacency matrix is slightly different, and there is no dropout on the first layer.
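For context, the normalization difference can be sketched as follows. To my reading, this repo's preprocessing row-normalizes A + I, while the paper uses the symmetric "renormalization trick"; the function names below are illustrative, not from either codebase:

```python
import numpy as np

def row_normalize(adj):
    # D^{-1} (A + I): row-stochastic normalization (what pygcn's
    # preprocessing appears to use; each row sums to 1).
    a = adj + np.eye(adj.shape[0])
    return a / a.sum(axis=1, keepdims=True)

def sym_normalize(adj):
    # D^{-1/2} (A + I) D^{-1/2}: the symmetric "renormalization trick"
    # described in the GCN paper.
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
```

The two matrices agree only on regular graphs, so the propagated features (and hence accuracy) can differ slightly between the two implementations.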
On Tue, 25 Sep 2018 at 10:02, hokuto_HIRANO notifications@github.com wrote:
When I trained it, the results from this repository are more accurate than those reported in the original paper ("Semi-Supervised Classification with Graph Convolutional Networks") on the Cora dataset.
What is different from the original code?
I have been trying to understand what you mean by 'there is no dropout on the first layer' and 'sparse dropout'.
```python
def forward(self, x, adj):
    x = F.relu(self.gc1(x, adj))
    x = F.dropout(x, self.dropout, training=self.training)  # <--- this line
    x = self.gc2(x, adj)
    return F.log_softmax(x, dim=1)
```
I am assuming the marked line is the dropout on the first layer. Please let me know what I am missing.
Thanks!
Yes, this is correct; my wording was a bit ambiguous. The original TensorFlow-based implementation also applies dropout to the input features directly (which is what I meant by "first layer").
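A sketch of what adding that input-feature dropout would look like, using a minimal stand-in for the GraphConvolution layer (the real class isn't shown in this thread; all names here are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGC(nn.Module):
    """Stand-in for a graph convolution layer: X' = A X W."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_features, out_features) * 0.1)

    def forward(self, x, adj):
        return adj @ (x @ self.weight)

class GCNWithInputDropout(nn.Module):
    def __init__(self, nfeat, nhid, nclass, dropout=0.5):
        super().__init__()
        self.gc1 = SimpleGC(nfeat, nhid)
        self.gc2 = SimpleGC(nhid, nclass)
        self.dropout = dropout

    def forward(self, x, adj):
        # Extra dropout on the raw input features, matching the original
        # TensorFlow implementation (this line is what pygcn omits).
        x = F.dropout(x, self.dropout, training=self.training)
        x = F.relu(self.gc1(x, adj))
        x = F.dropout(x, self.dropout, training=self.training)
        x = self.gc2(x, adj)
        return F.log_softmax(x, dim=1)
```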
I see. Thanks for the clarification!
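For anyone also wondering about the "sparse dropout" mentioned above: the original TensorFlow code drops stored entries of the sparse input feature matrix and rescales the survivors. A rough PyTorch equivalent might look like this (a sketch, not code from either repo):

```python
import torch

def sparse_dropout(x, p, training=True):
    """Dropout on a sparse COO tensor: randomly drop stored entries and
    rescale the survivors by 1/(1-p), like standard inverted dropout."""
    if not training or p == 0.0:
        return x
    x = x.coalesce()
    keep = torch.rand(x.values().shape[0]) >= p  # keep each entry with prob 1-p
    indices = x.indices()[:, keep]
    values = x.values()[keep] / (1.0 - p)
    return torch.sparse_coo_tensor(indices, values, x.shape)
```

Operating on the stored values directly avoids densifying the (large, sparse) feature matrix just to apply dropout.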
Hi, this is very clean code. Good job. I compared the accuracy on Cora between this repo and the GCN sample code from PyG and DGL. Surprisingly, the result from this one is about 2 to 3 points better (0.83 vs. 0.81). Is it because of the slightly different adjacency matrix normalization? Thanks a lot.
When I trained it, the results from this repository are more accurate than those reported in the original paper ("Semi-Supervised Classification with Graph Convolutional Networks") on the Cora dataset.
What is different from the original code?