Closed aurelio-amerio closed 2 years ago
The graph convolutional layers rely on tf.sparse.sparse_dense_matmul(...)
to multiply the Laplacian with the input, and I believe this operation is currently not supported on TPUs.
The graph convolution indeed requires the multiplication of the data matrix with a sparse matrix (a discrete Laplacian operator) that represents the spherical structure.
Is there support for sparse operations on TPUs? The code could be updated to use something other than tf.sparse.sparse_dense_matmul.
Alternatively, if memory allows, the sparse matrix could be converted to a dense one, after which the standard matmul would work.
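A minimal sketch of that dense fallback, assuming a toy 3x3 Laplacian (the indices and values below are purely illustrative, not the library's actual operator):

```python
import numpy as np
import tensorflow as tf

# Hypothetical small sparse Laplacian, for illustration only.
laplacian = tf.sparse.SparseTensor(
    indices=[[0, 0], [0, 1], [1, 0], [1, 1], [2, 2]],
    values=[2.0, -1.0, -1.0, 2.0, 1.0],
    dense_shape=[3, 3],
)
laplacian = tf.sparse.reorder(laplacian)  # ensure canonical index ordering

x = tf.constant(np.random.rand(3, 4).astype(np.float32))  # dummy input data

# Sparse path (the one that is reportedly unsupported on TPUs):
y_sparse = tf.sparse.sparse_dense_matmul(laplacian, x)

# Dense fallback: materialize the Laplacian and use a plain matmul,
# which should work on TPUs if the dense matrix fits in memory.
y_dense = tf.matmul(tf.sparse.to_dense(laplacian), x)
```

Both paths should produce the same result; the trade-off is memory, since a dense Laplacian grows quadratically with the number of pixels.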
Another option would be to adapt the code to use a graph neural network library that supports TPUs.
Hello, I was wondering if it's possible to use this library with a TPU, or if there are limitations that prevent one from doing so.
Thank you very much for your help!