seongjunyun / Graph_Transformer_Networks

Graph Transformer Networks (Authors' PyTorch implementation for the NeurIPS 19 paper)

How much GPU memory is required? #2

Closed kepsail closed 4 years ago

kepsail commented 4 years ago

My GPU has 10.92 GiB of total memory. When I run main_sparse.py on the ACM data, it raises RuntimeError: CUDA out of memory.

iceshzc commented 4 years ago

I get the same issue. I ran the source code on a CPU with 32 GB of RAM; there were no other errors, just slow speed.

seongjunyun commented 4 years ago

Hi all, we also ran our code on the CPU when the datasets were DBLP and ACM, so we recommend running main.py on the DBLP and ACM data. Thank you!
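
For anyone hitting the same error, here is a minimal sketch of the usual PyTorch pattern for falling back to the CPU when no GPU (or not enough GPU memory) is available. The `model` and `features` names below are placeholders, not the actual GTN code from this repo:

```python
import torch
import torch.nn as nn

# Placeholder model and inputs standing in for the GTN model and node features.
model = nn.Linear(64, 4)
features = torch.randn(1000, 64)

# Use the GPU only if one is available; otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
features = features.to(device)

output = model(features)
print(output.shape, device)
```

Running the dense version (main.py) on CPU as the authors suggest avoids the CUDA allocation entirely, at the cost of slower training, which matches what iceshzc observed on a 32 GB machine.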