Hi,
I'm using tf_geometric in a distributed fashion for a node classification problem. My BatchGraph contains thousands of Graphs and spans several hundred gigabytes. When using the GCN architecture, all my nodes run out of memory, even though I use "experimental_distribute_datasets_from_function" as shown in the sample code. Isn't it supposed to distribute the dataset across all my nodes?

I also see that a new parameter, "num_or_size_splits", has been added to the API. Could this help in my case, and how does it differ from the distributed setting?
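For reference, my input pipeline roughly follows the sample code like this (a simplified sketch; `num_graphs`, the global batch size, and the index-only dataset are placeholders, and the per-graph feature loading is omitted):

```python
import tensorflow as tf

# Placeholders: in reality the graphs are loaded from disk elsewhere.
num_graphs = 10000
GLOBAL_BATCH_SIZE = 32

strategy = tf.distribute.MultiWorkerMirroredStrategy()

def dataset_fn(input_context):
    # Shard graph indices across the input pipelines and batch per replica.
    per_replica_batch = input_context.get_per_replica_batch_size(GLOBAL_BATCH_SIZE)
    ds = tf.data.Dataset.range(num_graphs)
    ds = ds.shard(input_context.num_input_pipelines,
                  input_context.input_pipeline_id)
    return ds.batch(per_replica_batch)

dist_dataset = strategy.experimental_distribute_datasets_from_function(dataset_fn)
```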
Thank you,