Closed yurivict closed 1 year ago
This is the paper's design. See Eq.3 in the paper.
I see, thank you.
Also, the example prints `accuracy_mean, 0.8985` before training and `accuracy_mean, 0.8193` after training. Does this mean that training didn't succeed, since accuracy didn't improve?
Also, the loss function is negative, which is strange.
Also, this example doesn't seem to have any test set to evaluate accuracy.
This can happen in self-supervised node/graph representation learning tasks, as some studies claim that an untrained GNN can already perform quite well. If the self-supervised loss function is not consistent with the downstream task, pretraining can even harm performance.
There is a train/test split in: https://github.com/dmlc/dgl/blob/master/examples/pytorch/mvgrl/graph/utils.py#L12-L26
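For illustration only (this is a hypothetical sketch, not the exact code in the linked `utils.py`), a split over graph indices for evaluating learned embeddings can look like:

```python
import random

def train_test_split_indices(num_graphs, test_ratio=0.1, seed=0):
    # Shuffle all graph indices reproducibly, then hold out a tail for testing.
    idx = list(range(num_graphs))
    random.Random(seed).shuffle(idx)
    cut = int(num_graphs * (1 - test_ratio))
    return idx[:cut], idx[cut:]

train_idx, test_idx = train_test_split_indices(100)
```

A downstream classifier (e.g. logistic regression on the frozen embeddings) would then be fit on `train_idx` and scored on `test_idx`.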
The loss can be negative, as it is a difference between the positive loss and the negative loss: https://github.com/dmlc/dgl/blob/master/examples/pytorch/mvgrl/graph/utils.py#L63-L83
If the positives' scores are lower than the negatives' scores, it will become negative.
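To see why such a value is unbounded below, here is a minimal sketch (hypothetical; the actual `local_global_loss_` in the linked file uses JSD-style terms, but it has the same difference structure):

```python
def score_diff_loss(pos_scores, neg_scores):
    # The loss is the difference of two mean score terms, so it has no
    # lower bound at zero: it goes negative once one term exceeds the other.
    pos_mean = sum(pos_scores) / len(pos_scores)
    neg_mean = sum(neg_scores) / len(neg_scores)
    return neg_mean - pos_mean

# Well-separated scores can drive the loss below zero.
print(score_diff_loss([2.0, 3.0], [0.5, 1.0]))  # -1.75
```

So a negative loss value by itself is not a sign of a bug; only the trend of the loss during training is meaningful.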
This issue has been automatically marked as stale due to lack of activity. It will be closed if no further activity occurs. Thank you.
Hi @yurivict, I am closing this issue assuming you are happy with our response. Feel free to follow up and reopen the issue if you have more questions.
This example computes something called "Personalized Page Ranking" in the procedure `compute_ppr`. What is the "Personalized Page Ranking" of a graph, and why is it needed in order to train a neural network on graphs? I think this example needs a README explaining why this is done.
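For context, Personalized PageRank (PPR) assigns each node a distribution over the graph that is biased toward restarting at that node, and it has a closed form as a diffusion matrix. A minimal dense sketch (an assumption about what a `compute_ppr`-style routine does, not the repository's exact code):

```python
import numpy as np

def ppr_diffusion(adj, alpha=0.2):
    # Dense Personalized PageRank diffusion:
    #   S = alpha * (I - (1 - alpha) * A_hat)^-1
    # where A_hat is the symmetrically normalized adjacency with self-loops
    # and alpha is the restart (teleport) probability.
    n = adj.shape[0]
    a = adj + np.eye(n)                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    a_hat = d_inv_sqrt @ a @ d_inv_sqrt      # symmetric normalization
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * a_hat)
```

Row `i` of the result weights every node by its diffusion-based proximity to node `i`. MVGRL-style methods use such a diffusion matrix as a second structural "view" of the graph and contrast it against the raw adjacency view, which is why the example computes it before training.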
@hengruizhang98 Maybe you know what `compute_ppr` computes and why it is needed to train a neural network on graphs?