snap-stanford / ogb

Benchmark datasets, data loaders, and evaluators for graph machine learning
https://ogb.stanford.edu
MIT License

Has anyone reproduced the results of the ogb_lsc_MAG240M baseline? #176

Closed · slacklife closed this issue 3 years ago

slacklife commented 3 years ago

Has anyone reproduced the results of this experiment? I used the same dataset and the same code as this example. The reported validation accuracy is 0.701, but I only got a validation accuracy of 0.48. Is there anything else I need to do beyond running the original code?

Initializing data loader...
Loading model...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 8685/8685 [20:21<00:00, 7.11it/s]
  0%| | 0/9177 [00:00<?, ?it/s]
Validation accuracy: 0.4814500284276965
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 9177/9177 [18:57<00:00, 8.07it/s]

rusty1s commented 3 years ago

I guess that your full feature matrix is corrupted to some extent. You can alternatively download it from here to see if that fixes your issue.
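
To rule out a corrupted download, you can compare a checksum of your local file against the official copy. Below is a minimal sketch that computes the MD5 in chunks so the full array never has to fit in memory; the local path to full_feature.npy is an assumption, so adjust it to wherever the file lives on your machine:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 of a (potentially very large) file in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical local path -- point this at your downloaded feature matrix.
print(md5_of_file("full_feature.npy"))
```

If the digest of your local file differs from that of the freshly downloaded copy, the file was corrupted and should be replaced.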

Since you are using the DGL version, it might be a good idea to ask for help there directly, as this example is maintained by the DGL team.

slacklife commented 3 years ago

Hi rusty1s, the md5 of the full_feature.npy I used is d98ffac92986de2fdaabc3fe44ced36c. I downloaded the file from your link and it has the same md5, so the full feature matrix is not corrupted.

Is there anything else that could be causing this?

weihua916 commented 3 years ago

Could you please ask the DGL team about this, as they maintain the code?

rusty1s commented 3 years ago

Are you sure the model has finished training? It looks like the 0.48 accuracy is what you get after just the first epoch.