divelab / GOOD

GOOD: A Graph Out-of-Distribution Benchmark [NeurIPS 2022 Datasets and Benchmarks]
https://good.readthedocs.io/
GNU General Public License v3.0
180 stars 19 forks

Leaderboard results of GOODTwitter #16

Closed Wuyxin closed 1 year ago

Wuyxin commented 1 year ago

Hi,

Thanks for the code; this is great work.

I saw that you have included a Twitter dataset besides the datasets in the paper. I am wondering if you happen to have leaderboard results for this dataset as well?

CM-BF commented 1 year ago

Hi Yingxin,

Thank you for your question. Although we provide GOODTwitter in this repo, it wasn't officially included in this work, so we don't have leaderboard results for it right now. Sorry for the inconvenience.

Best, Shurui

Wuyxin commented 1 year ago

Thanks, that's fair!

Another quick question: for WebKB, since the graph only has 617 nodes, does setting the batch size to 4096 mean that the GraphSAINT sampler returns the whole graph every time (i.e., the same as whole-graph training)?

CM-BF commented 1 year ago

The walk-based GraphSAINT sampler won't sample the whole graph. Instead, the size of the sampled subgraphs depends on walk_length; it is not determined by the batch size.
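For intuition, the walk-based idea can be sketched roughly as below. This is a simplified, self-contained illustration of GraphSAINT-style random-walk sampling, not GOOD's actual implementation; the function name `sample_subgraph`, its parameters, and the adjacency-dict graph representation are all hypothetical. The point it illustrates: the sampled node set is bounded by `num_roots * (walk_length + 1)`, so short walks keep the subgraph small regardless of how large the nominal batch size is.

```python
import random

def sample_subgraph(adj, num_roots, walk_length, seed=0):
    """Walk-based subgraph sampling (GraphSAINT-RW style, simplified).

    adj: dict mapping each node to a list of its neighbors.
    Returns the set of nodes visited by `num_roots` random walks
    of length `walk_length`, each started from a random root node.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    visited = set()
    for _ in range(num_roots):
        v = rng.choice(nodes)      # pick a random root
        visited.add(v)
        for _ in range(walk_length):
            if not adj[v]:         # dead end: stop this walk
                break
            v = rng.choice(adj[v]) # step to a random neighbor
            visited.add(v)
    # At most num_roots * (walk_length + 1) distinct nodes are visited,
    # independent of the total graph size.
    return visited
```

With a short walk_length, each sampled subgraph stays far smaller than a 617-node graph even if many walks are drawn, which matches the comment above that the batch size does not force whole-graph sampling.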

Please let me know if you have any questions. :smile:

Wuyxin commented 1 year ago

I see! But why use the sampler when we could feed the whole graph into the GNN in this case? Wouldn't that be easier, given that the graph is small?

CM-BF commented 1 year ago

We tried feeding the whole graph directly in the small-graph cases, but the performance difference was trivial. Therefore, we simply apply this sampler to all datasets for consistency. :)

Best, Shurui

Wuyxin commented 1 year ago

Got it. Yeah that's also what I got - the performance doesn't seem to differ much. Thanks!