zhao-tong / GAug

AAAI'21: Data Augmentation for Graph Neural Networks
MIT License

Issues on mini-batch training #5

Closed: zzysh12345 closed this 3 years ago

zzysh12345 commented 3 years ago

Hi, I am very amazed by your work. I have some questions about mini-batch training. There are results on pubmed and ogb-arxiv in your paper, but I cannot find the corresponding code in this repo to reproduce them. What should I do if I want to run the model on these two datasets? Besides, I notice that in the nc model, nodes are divided into mini-batches before being fed into the model, but the ep model still runs on the whole graph and simply takes the corresponding part of its output afterwards for every batch: `Z = Z[nodes_batch]`. Would this exceed memory on large datasets such as ogb?
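Roughly, the pattern I am describing looks like this (a simplified sketch with placeholder names, not the actual code in this repo):

```python
import torch
import torch.nn.functional as F

# Simplified sketch of the pattern described above; ep_model / nc_model
# and their call signatures are placeholders, not the real GAug modules.
def train_step(ep_model, nc_model, features, adj, labels, nodes_batch):
    # The ep model still runs on the whole graph ...
    Z = ep_model(features, adj)      # [num_nodes, hidden_dim]
    # ... and only the rows of the current mini-batch are kept afterwards.
    Z_batch = Z[nodes_batch]         # [batch_size, hidden_dim]
    logits = nc_model(Z_batch)
    loss = F.cross_entropy(logits, labels[nodes_batch])
    return loss
```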

zzysh12345 commented 3 years ago

hello?

zhao-tong commented 3 years ago

Hi, sorry for the late reply, I didn't notice this issue earlier. The code for the mini-batched GAugO can be used just like the normal one, so you can reproduce the results with the optuna scripts. For the second question, the memory needed for Z is very small compared with what the adjacency matrix uses, so I didn't batch that part. I tested it on ogb-arxiv and it worked fine, but batching the ep model as well is a good idea.
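Something along these lines could work for batching the ep part too (a rough, untested sketch; `encoder` and its signature are placeholders, not the actual GAug edge-prediction module):

```python
import torch

def ep_scores_batched(encoder, features, adj, nodes_batch):
    # Encode the graph (a real GNN encoder would still need the batch
    # nodes' neighborhoods), then keep only the rows of the batch.
    Z_batch = encoder(features, adj)[nodes_batch]   # [B, d]
    # Score candidate edges only among batch nodes instead of
    # materializing the full N x N probability matrix.
    edge_logits = Z_batch @ Z_batch.t()             # [B, B]
    return torch.sigmoid(edge_logits)
```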

zzysh12345 commented 3 years ago

Got it. I will give it a try. Thanks for your reply.