bits-glitch closed this issue 2 years ago
Generating negative training edges is expected to be performed inside the model via negative sampling. This models the real-world scenario in which we are only given positive relations and want to infer missing ones.
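In-model negative sampling can be sketched as follows: at each training step, random node pairs are drawn and any pair that is a known positive edge is rejected. This is a minimal NumPy sketch, not OGB's implementation; the function name and signature are illustrative (libraries such as PyTorch Geometric provide an equivalent `negative_sampling` utility).

```python
import numpy as np

def sample_negative_edges(pos_edges, num_nodes, num_samples, rng):
    """Uniformly sample node pairs that are not in the positive edge set.

    pos_edges: array of shape (E, 2) holding known positive edges.
    Rejection sampling: draw a random pair, keep it only if it is not
    a self-loop and not already a positive edge.
    """
    pos_set = {tuple(e) for e in pos_edges}
    negatives = []
    while len(negatives) < num_samples:
        u, v = rng.integers(0, num_nodes, size=2)
        if u != v and (u, v) not in pos_set:
            negatives.append((u, v))
    return np.array(negatives)

rng = np.random.default_rng(0)
pos = np.array([[0, 1], [1, 2], [2, 3]])  # toy positive training edges
neg = sample_negative_edges(pos, num_nodes=10, num_samples=5, rng=rng)
```

In a training loop, a fresh batch of negatives would be drawn at every step, so the model sees many different non-edges over the course of training rather than a fixed negative set.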
Thank you very much for the explanation!
Hello OGB Team,
I have looked at several examples from the LinkPropPred datasets and have a question concerning the train/valid/test split.
Let's take an example: if I look at the split returned by dataset.get_edge_split(), I can examine 5 dictionaries: split['train']['edge'], split['valid']['edge'], split['valid']['edge_neg'], split['test']['edge'], and split['test']['edge_neg'].
I do understand that we need negative edge samples to test the model's predictions on negative/non-existing edges as well, but why don't you include negative train edges, i.e. split['train']['edge_neg']? Does this mean that we are not training on negative edges, but only measuring the accuracy on negative test and validation links?
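The split structure being asked about can be sketched with synthetic stand-in arrays; the shapes below are toy values, not from a real dataset, but the key layout matches what the question describes:

```python
import numpy as np

# Synthetic stand-in for the dict returned by dataset.get_edge_split();
# each entry is an (N, 2) array of node-index pairs.
split = {
    "train": {"edge": np.zeros((100, 2), dtype=np.int64)},
    "valid": {"edge": np.zeros((10, 2), dtype=np.int64),
              "edge_neg": np.zeros((10, 2), dtype=np.int64)},
    "test":  {"edge": np.zeros((10, 2), dtype=np.int64),
              "edge_neg": np.zeros((10, 2), dtype=np.int64)},
}

# Note the asymmetry: there is no split["train"]["edge_neg"] key.
for name, d in split.items():
    print(name, sorted(d.keys()))
```

Only valid and test carry pre-generated negatives (so that evaluation is reproducible across papers); training negatives are left to the model, as the answer above explains.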