Open hl8086 opened 3 years ago
If you don't remove the positive link, the GNN model already knows there is a positive link there and thus makes trivial predictions. Think about test time: there are no links between the target node pairs you want to predict. You have to predict link existence from the context subgraph, instead of directly telling the model that there is a link.
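To illustrate the point above, here is a minimal sketch (not the repo's actual code) of SEAL-style preprocessing with networkx: the target link is removed from a copy of the graph before extracting the enclosing subgraph, so the label cannot leak into the model's input. The function name `enclosing_subgraph` and the `hops` parameter are my own for illustration.

```python
import networkx as nx

def enclosing_subgraph(g, u, v, hops=1):
    """Extract the h-hop enclosing subgraph around a target pair (u, v),
    removing the target link itself so the model cannot see the label."""
    sub = g.copy()
    # Remove the positive link between the target nodes, if present,
    # so the prediction must come from the surrounding structure.
    if sub.has_edge(u, v):
        sub.remove_edge(u, v)
    # Collect all nodes within `hops` of either target node.
    nodes = {u, v}
    for n in (u, v):
        nodes |= set(nx.single_source_shortest_path_length(sub, n, cutoff=hops))
    return sub.subgraph(nodes).copy()

# Example: a triangle plus a pendant node; target pair (0, 1).
g = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3)])
sub = enclosing_subgraph(g, 0, 1, hops=1)
print(sub.has_edge(0, 1))   # False: the target edge was removed
print(sorted(sub.nodes))    # [0, 1, 2]
```

Note that only the copy is modified; the original training graph keeps the edge for use when this pair appears in other pairs' context subgraphs.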
Hi, it seems that this paper (SEAL) proposes a trick named negative injection, which temporarily injects negative links into the graph. Isn't that in conflict with removing the positive link, or did I misunderstand? Thanks
Hi, thanks for sharing your code, I have encountered some problems.
```python
if g.has_edge(0, 1):
    g.remove_edge(0, 1)
```
Why do we need to remove the links between positive sample pairs of the training set? When using supervised learning for link prediction, the links between positive training pairs are usually preserved. Thanks