muhanzhang / SEAL

SEAL (learning from Subgraphs, Embeddings, and Attributes for Link prediction). "M. Zhang, Y. Chen, Link Prediction Based on Graph Neural Networks, NeurIPS 2018 spotlight".

about subgraph extraction #64

Open hl8086 opened 3 years ago

hl8086 commented 3 years ago

Hi, thanks for sharing your code. I have encountered a problem with this part: `if g.has_edge(0, 1): g.remove_edge(0, 1)`. Why do we need to remove the links between positive samples of the training set? When using supervised learning for link prediction, the links between positive training samples are usually preserved. Thanks.

muhanzhang commented 3 years ago

If you don't remove the positive link, the GNN already knows there is a positive link there and makes a trivial prediction. Think about test time: there is no link between the target nodes you want to predict. You have to predict link existence from the context subgraph, instead of directly telling the model that the link is there.
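A minimal sketch of this idea, assuming a networkx graph (this is an illustration, not the repo's actual extraction code): extract the h-hop enclosing subgraph around a candidate pair and delete the target link itself, so the model must infer it from the surrounding structure alone.

```python
# Hedged sketch: h-hop enclosing subgraph extraction with target-link removal.
# `enclosing_subgraph` is a hypothetical helper name, not from the SEAL repo.
import networkx as nx

def enclosing_subgraph(g, u, v, h=1):
    # Collect all nodes within h hops of either endpoint.
    nodes = {u, v}
    for src in (u, v):
        nodes |= set(nx.single_source_shortest_path_length(g, src, cutoff=h))
    sub = g.subgraph(nodes).copy()
    # Remove the target link so its presence is not leaked to the GNN;
    # this mirrors the `if g.has_edge(0, 1): g.remove_edge(0, 1)` snippet.
    if sub.has_edge(u, v):
        sub.remove_edge(u, v)
    return sub

g = nx.karate_club_graph()
sub = enclosing_subgraph(g, 0, 1, h=1)
print(sub.has_edge(0, 1))  # False: the target link is absent from the subgraph
```

The original graph `g` is left untouched because the extraction works on a copy; only the per-sample subgraph has the target link removed.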

11z000i commented 3 days ago

> If you don't remove the positive link, the GNN model has already known there is a positive link there thus makes trivial predictions. Think about when you test, there are not any links between the target nodes to predict. You have to predict the link existence from the context subgraph, instead of directly telling the model that there is a link.

Hi, it seems that this paper (SEAL) proposes a trick named negative injection, which temporarily injects negative links into the graph. Isn't this in conflict with removing the positive links, or have I misunderstood? Thanks.
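Going by this thread's own description of negative injection (an assumption; the sketch below is illustrative, not the SEAL repo's code), the two tricks need not conflict: the sampled negative links are temporarily added to a working copy of the graph, and then every target link, positive or injected negative, is removed from its own enclosing subgraph before extraction, so both classes are processed under the same conditions.

```python
# Hedged sketch of negative injection as described in this thread.
# `enclosing_subgraph` and `subgraphs_with_injection` are hypothetical names.
import networkx as nx

def enclosing_subgraph(g, u, v, h=1):
    # h-hop neighborhood of both endpoints, with the target link removed.
    nodes = {u, v}
    for src in (u, v):
        nodes |= set(nx.single_source_shortest_path_length(g, src, cutoff=h))
    sub = g.subgraph(nodes).copy()
    if sub.has_edge(u, v):
        sub.remove_edge(u, v)  # applies to positives and injected negatives alike
    return sub

def subgraphs_with_injection(g, pos_pairs, neg_pairs, h=1):
    work = g.copy()
    work.add_edges_from(neg_pairs)  # temporarily inject negative links
    data = [(enclosing_subgraph(work, u, v, h), 1) for u, v in pos_pairs]
    data += [(enclosing_subgraph(work, u, v, h), 0) for u, v in neg_pairs]
    return data  # `work` is a copy, so the original graph is untouched

g = nx.karate_club_graph()
data = subgraphs_with_injection(g, [(0, 1)], [(0, 9)])
```

Because the injection happens on a copy and the target link is stripped from each extracted subgraph, removing positive links and injecting negative links operate at different stages rather than contradicting each other.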