nghiahhnguyen closed this issue 4 years ago
In case someone else needs the same functionality: this can be done by stacking multiple graphs into one giant graph, performing the computation on that giant graph, and then decomposing the result for further evaluation. You can refer to the Advanced Mini-Batching section of the PyTorch Geometric documentation for further details.
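For reference, a minimal sketch of that stacking trick in plain PyTorch (the helper `block_diag_batch` and the toy shapes are my own illustration, not part of the hgcn code): the per-graph adjacencies become blocks of one big block-diagonal sparse matrix, so a single 2D `torch.spmm` covers the whole batch.

```python
import torch

def block_diag_batch(adjs, feats):
    """Stack graphs into one block-diagonal sparse adjacency.

    adjs:  list of sparse (n_i, n_i) adjacency tensors
    feats: list of dense  (n_i, d)   node-feature tensors
    """
    # Node-index offset of each graph inside the stacked graph
    offsets = torch.cumsum(
        torch.tensor([0] + [a.shape[0] for a in adjs[:-1]]), dim=0)
    # Shift each graph's edge indices by its offset and concatenate
    indices = torch.cat(
        [a.coalesce().indices() + off for a, off in zip(adjs, offsets)],
        dim=1)
    values = torch.cat([a.coalesce().values() for a in adjs])
    n = sum(a.shape[0] for a in adjs)
    big_adj = torch.sparse_coo_tensor(indices, values, (n, n)).coalesce()
    big_x = torch.cat(feats, dim=0)  # (sum n_i, d)
    return big_adj, big_x

# Two toy graphs with 3 and 2 nodes, 4 features each
a1, a2 = torch.eye(3).to_sparse(), torch.eye(2).to_sparse()
x1, x2 = torch.randn(3, 4), torch.randn(2, 4)

adj, x = block_diag_batch([a1, a2], [x1, x2])
out = torch.spmm(adj, x)               # one 2D spmm for the whole batch
out1, out2 = out.split([3, 2], dim=0)  # decompose per graph afterwards
```

Because the off-diagonal blocks are all zero, no messages flow between graphs, so the result is identical to running each graph separately.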
For the moment, I see that the code allows one to parallelize the computation of node embeddings across all nodes of a single graph. However, if I try to parallelize the computation across multiple graphs (perhaps by passing multiple graphs per batch), an error occurs in the hyperbolic layer because torch.spmm does not support 3D tensors.
File "~/hgcn/layers/hyp_layers.py", line 133, in forward support_t = torch.spmm(adj, x_tangent) RuntimeError: 2D tensors expected, got 3D, 3D tensors at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:260
Is there any way for me to parallelize the computation across graphs?