Hi there, L_sur is applied to nodes of both classes, while L_cla is applied only to the positive nodes to make it invariant. In other words, in Eq. 10, the condition y_v=1 applies only to L_cla; we agree it would be clearer to add a separate sum symbol for the second term.
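For concreteness, here is a minimal sketch of how such a masked objective could be aggregated (the function name, the per-node loss tensors, and the label convention are illustrative assumptions, not the repository's actual code):

```python
import torch

def combined_loss(l_sur_per_node: torch.Tensor,
                  l_cla_per_node: torch.Tensor,
                  y: torch.Tensor) -> torch.Tensor:
    """Hypothetical aggregation mirroring Eq. 10: L_sur is averaged
    over nodes of both classes, while L_cla is averaged only over
    the positive nodes (y_v = 1)."""
    pos_mask = (y == 1)
    loss_sur = l_sur_per_node.mean()            # all nodes
    loss_cla = l_cla_per_node[pos_mask].mean()  # positive nodes only
    return loss_sur + loss_cla
```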
As for independence, we did consider this property, but unfortunately our method in this paper does not satisfy it. That is why we term our method "separation" rather than "disentanglement". We hope to improve on this in future work, and you are welcome to collaborate if you have ideas on it.
Thanks for your reply.
While running your code, another question came up.
I noticed that Loss_constraint is only used to update the node features, but how does it influence the GNN's parameters? After all, at test time we obtain the embeddings of the test nodes using only the GNN, without Loss_constraint.
I would really appreciate it if you could answer this.
Thanks for the question. Our method operates in a semi-supervised transductive setting, meaning all nodes are seen during training, with the labels of the test nodes unknown. Hence the embeddings of the test nodes are learned and updated during training.
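A minimal sketch of the transductive pattern being described (the GNN module, mask names, and loss are placeholders, not the repository's code): the whole graph, including test nodes, passes through the model at every step, while only the labeled training nodes contribute to the supervised loss.

```python
import torch
import torch.nn.functional as F

# Hypothetical transductive training step: the GNN processes the whole
# graph (train + test nodes together), so test-node embeddings are
# learned during training even though only train labels enter the loss.
def train_step(gnn, optimizer, x, edge_index, y, train_mask):
    optimizer.zero_grad()
    logits = gnn(x, edge_index)                # forward over ALL nodes
    loss = F.cross_entropy(logits[train_mask], y[train_mask])
    loss.backward()
    optimizer.step()
    return loss.item()
```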
Hi, I recently read your paper and thought it was a great idea.
I noticed that Eq. 10 in the paper adds L_constraint only to the positive nodes. I am wondering why it is not also added to the negative nodes, since according to Section 3.2, L_sur is more useful for the negative nodes. Also, how do you guarantee the independence between C and S?
Thank you very much,