Thanks for replying, I have another question. The training speed of GraphSAGE on Reddit reported in your paper is 0.4260 s/batch, but when I ran the official code on GPU the speed was about 0.1 s/batch. Did you run it on CPU, or did I miss anything?
We compared all methods on CPU. On GPU the trend should be similar. Also, different environments lead to different computing times; even on GPUs, different graphics cards can show large speed differences.
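For what it's worth, here is a minimal sketch of how one might measure seconds-per-batch on either CPU or GPU (the `model`, `loader`, and `optimizer` names are placeholders, not objects from this repository); on GPU, synchronizing before reading the clock matters for a fair comparison with CPU numbers:

```python
import time
import torch

def average_batch_time(model, loader, optimizer, device, num_batches=100):
    """Rough per-batch timing sketch; `model`, `loader`, and `optimizer`
    are placeholders, not objects from this repository."""
    model.train()
    times = []
    for i, batch in enumerate(loader):
        if i >= num_batches:
            break
        if device.type == "cuda":
            torch.cuda.synchronize()      # make sure prior GPU work has finished
        start = time.perf_counter()
        optimizer.zero_grad()
        loss = model(batch.to(device))    # assumes the model returns a scalar loss
        loss.backward()
        optimizer.step()
        if device.type == "cuda":
            torch.cuda.synchronize()      # wait for the step to finish before stopping the clock
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)
```

Without the synchronize calls, asynchronous CUDA kernel launches can make GPU runs look much faster per batch than they really are, which is one common source of mismatched timing numbers.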
Thanks!
Hi, thanks for your great work! I have two questions:
1. The paper says: "The non-batched version of GCN runs out of memory on the large graph Reddit." How does batched GCN reduce memory usage? Don't we still need to load the adjacency matrix for batched GCN? (See the sketch after these questions.)
2. GCN is a transductive method. How does it work on Reddit, which was treated as an inductive dataset in previous work?
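Regarding question 1, and not speaking for the authors, one common way a batched GCN layer avoids running out of memory is to keep the full adjacency as a sparse CPU matrix and only materialize batch-sized dense intermediates, slicing out the adjacency rows for the current batch of nodes. A rough illustrative sketch, with all names hypothetical and assuming the sparse adjacency itself fits in memory (the savings come from the dense activations being batch-sized rather than graph-sized):

```python
import numpy as np
import scipy.sparse as sp

def gcn_batch_forward(adj_norm, features, weight, batch_nodes):
    """Illustrative one-layer GCN step on a mini-batch.

    adj_norm:    normalized adjacency as a scipy.sparse CSR matrix (N x N)
    features:    dense node features (N x F)
    weight:      layer weight matrix (F x H)
    batch_nodes: indices of the nodes in the current mini-batch
    """
    adj_batch = adj_norm[batch_nodes]        # (batch, N) sparse row slice
    agg = adj_batch @ features               # aggregate neighbor features for the batch only
    return np.maximum(agg @ weight, 0.0)     # linear transform + ReLU, batch-sized output
```

Whether a particular implementation also samples neighbors (as GraphSAGE does) or uses the full neighborhood per batch changes the exact memory profile, so this is only meant to illustrate the general idea, not the authors' implementation.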