I think `torch.sort` will crash here since it OOMs. Not sure how we can resolve this :(
@rusty1s 🤔️ There is still plenty of memory available, so why does it OOM?
How much memory does your machine have? Storing the edge index alone takes about 40GB, and sorting it needs additional memory on top of that, which should bring the peak to 100-150GB.
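A rough back-of-the-envelope sketch of where that estimate comes from (the ~2.5 billion edge count below is an assumption inferred from the ~40GB figure, not a measured value):

```python
# Back-of-the-envelope memory estimate; the edge count is an assumption.
num_edges = 2_500_000_000        # assumed ~2.5e9 edges
bytes_per_elem = 8               # int64

# A [2, E] int64 edge_index tensor -> roughly 40 GB.
edge_index_gb = 2 * num_edges * bytes_per_elem / 1e9

# Sorting additionally allocates the flattened key `idx` (E elements), the
# returned `perm` (E elements), and the re-indexed copy `edge_index[:, perm]`
# (2 * E elements), all alive at the same time as the original edge_index.
sort_overhead_gb = (num_edges + num_edges + 2 * num_edges) * bytes_per_elem / 1e9

print(edge_index_gb)                     # ~40 GB for the edge index alone
print(edge_index_gb + sort_overhead_gb)  # ~120 GB peak, before any argsort scratch space
```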
I have 256GB of memory, and it was using about 170GB when it crashed...
What happens if you run the following, without using torch-sparse:
```python
idx = (121751666 * edge_index[0]).add_(edge_index[1])
perm = torch.argsort(idx)
edge_index = edge_index[:, perm]
```
Does it still crash?
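If it does not crash, one possible follow-up is to do the sort manually and pass `is_sorted=True`, so that `SparseTensor.from_edge_index` skips its own sort. This is only a sketch, assuming `N = 121751666` is your node count and `edge_index`/`edge_attr` are the tensors from your report:

```python
import torch
from torch_sparse import SparseTensor

N = 121751666  # assumed node count, taken from the multiplier above
# edge_index, edge_attr: the tensors from the original report

# Sort the (row, col) pairs once by a flattened key, then free the key
# before re-indexing to keep the memory peak as low as possible.
idx = (N * edge_index[0]).add_(edge_index[1])
perm = torch.argsort(idx)
del idx

edge_index = edge_index[:, perm]
edge_attr = edge_attr[perm]

# is_sorted=True tells torch-sparse the data is already ordered, so it does
# not repeat the memory-hungry sort inside from_edge_index.
adj = SparseTensor.from_edge_index(edge_index, edge_attr, (N, N), is_sorted=True)
```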
It works fine.
It doesn't crash on Python 3.10. I was previously running it on Python 3.9. Could this be a Python memory allocation bug?
That's strange, but nice find. I am not yet sure how to fix this, so if updating to Python 3.10 works for you, we can close this issue.
Thanks.
When I called `SparseTensor.from_edge_index(edge_index, edge_attr, (N, N), is_sorted=False)`, it crashed. Here are the argument sizes:
When I reduce the size of edge_index, it works. However, I have plenty of memory; when the crash happened, there was still 80GB available. Any way to solve this? Thanks!