snap-stanford / ogb

Benchmark datasets, data loaders, and evaluators for graph machine learning
https://ogb.stanford.edu
MIT License
1.89k stars · 397 forks

Vessel #356

Closed jqmcginnis closed 1 year ago

jqmcginnis commented 1 year ago

Hi OGB Team,

First off, thank you very much for making this happen; we are beyond stoked to see VesselGraph in the official OGB package!

Before we finally merge the changes into master, I would kindly ask you to (critically) review the reference implementations and double-check that I am using the evaluator and the package-specific functions in the intended way.

Moreover, I think we should discuss three final points:

(1) Who will compute the final scores for the website? How is this generally done, and do you/we need to use a specific graphics card? We would be able to obtain the results on Quadro RTX 8000 cards.

(2) We did our hyperparameter search on a sample region of the graph, which we think should be sufficient for this use case. How was this done for the other OGB graphs; is there an official procedure?

(3) I decided against including SEAL in the examples, as it is also not provided for the other graphs (as noted in the paper). My feeling is that it should be one of the first benchmarks on the leaderboard, and I will look into this topic in the official SEAL repository.

Once again, thank you very much and happy to answer any questions you might have!

Cheers,

Julian

weihua916 commented 1 year ago

Hi! Thank you so much for your contribution! The code overall looks great!

Re (1) (2), we would like you to own/maintain your example code in the OGB repository. As such, please run the experiments and do the hyper-parameter tuning yourself, then set the best hyper-parameters as the defaults in argparse. In README.md, you may record the model performance (see here). Let me know once this is done!
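The requested pattern, tuned values shipped as argparse defaults so a plain `python main.py` reproduces the reported numbers, can be sketched as follows. The flag names and values here are illustrative assumptions, not the actual hyperparameters from this PR.

```python
# Sketch: best hyper-parameters baked in as argparse defaults.
# Flag names and values are hypothetical placeholders.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description='ogbl-vessel example (sketch)')
    # Defaults stand in for the best values found during tuning,
    # so running the script with no flags reproduces the README scores.
    parser.add_argument('--lr', type=float, default=0.001)
    parser.add_argument('--hidden_channels', type=int, default=128)
    parser.add_argument('--num_layers', type=int, default=3)
    parser.add_argument('--epochs', type=int, default=100)
    return parser

if __name__ == '__main__':
    args = build_parser().parse_args()
    print(args)
```

Users can still override any flag (e.g. `--lr 0.01`) when exploring, while the defaults document the reported configuration.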

Re (3), sounds good. Besides, once this branch is merged into the master branch, we will ask you to make leaderboard submissions; these baselines will be on the leaderboard! Then you are free to have your own SEAL repo and add it to our leaderboard as well.

jqmcginnis commented 1 year ago

@weihua916 thank you very much for the feedback! Glad to hear that :slightly_smiling_face:

Regarding (1) (2), the hyperparameters used in the NeurIPS paper are already set as the default parameters in the argparse section. With the newest commit, I have added the scores to the README.md, similar to the example you proposed.

Feel free to merge into master once you have verified that everything has been addressed :slightly_smiling_face:

weihua916 commented 1 year ago

Thank you for your contribution! Looks great to me!