Open Yash685 opened 1 month ago
Were you running on one of our examples? https://github.com/dmlc/dgl/tree/master/examples/pytorch/ogb/ogbn-arxiv
Thank you for the response. Since my analysis is based on the GraphSAGE model, I reused the code here https://github.com/dmlc/dgl/blob/1.1.x/examples/pytorch/graphsage/node_classification.py and passed ogbn-arxiv as the dataset.
They may have different hyperparameters. Could you try the OGBN-Arxiv example and see if there is any problem?
@BarclayII We added reverse edges to the graph and the accuracy improved. Why is adding reverse edges not enabled by default for ogbn-arxiv and papers?
We should not change the original dataset by default. Adding reverse edges is a user's choice, not a property of the dataset.
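For readers unfamiliar with the transformation being discussed: below is a minimal, dependency-free sketch of what "adding reverse edges" means on an edge list. In DGL itself this is typically done with a helper such as `dgl.add_reverse_edges`, but the exact API call is not part of this thread; the toy function here only illustrates the idea.

```python
def add_reverse_edges(src, dst):
    """Return edge lists with a reverse edge (v, u) appended for every (u, v).

    ogbn-arxiv citations are directed (paper -> cited paper); making the
    graph bidirectional lets GNN messages flow in both directions, which
    is why it can improve accuracy.
    """
    return src + dst, dst + src

# Toy directed graph: 0 -> 1, 1 -> 2
src, dst = [0, 1], [1, 2]
new_src, new_dst = add_reverse_edges(src, dst)
# Resulting edges: 0->1, 1->2, 1->0, 2->1
```

Note that this doubles the edge count; whether that is appropriate depends on the task, which is why it is left as a user option rather than baked into the dataset loader.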
@Rhett-Ying Thanks for the reply. Can you suggest how to reach the leaderboard accuracy on the original dataset? We have tried the default and various other hyperparameters (batch size, fanouts, dropout, learning rate) for GraphSAGE training, without any improvement in accuracy.
❓ Questions and Help
Hello Team, I am conducting experiments on DGL with the ogbn-arxiv dataset. However, my test accuracy is 0.5547, which is significantly lower than the results reported on the OGB leaderboard. Could you please provide guidance on achieving better performance?
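For context on the reported number: a test accuracy like 0.5547 is simply the fraction of test nodes whose predicted class matches the ground-truth label. A minimal sketch (the names here are illustrative, not taken from the attached script):

```python
def accuracy(pred_labels, true_labels):
    """Fraction of nodes where the predicted class equals the ground truth."""
    correct = sum(p == t for p, t in zip(pred_labels, true_labels))
    return correct / len(true_labels)

# Toy example: 2 of 4 predictions are correct -> 0.5
print(accuracy([3, 1, 0, 2], [3, 2, 0, 1]))  # 0.5
```

In practice the predictions come from an argmax over the model's per-class logits, and OGB provides its own evaluator for the official leaderboard metric.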
DGL 1.2
Experiment details:
Hardware details:
Code used to measure accuracy: arxiv_accuracy.txt
Output Snapshot