### 🐛 Describe the bug

I'm trying to convert a graph to a dense adjacency matrix, but going via networkx is too memory-intensive with the real file. I was hoping I could build it from the CSR representation instead of going through networkx, since that is more efficient. Oddly, the error below occurs even with very small files (example attached).
```python
import pandas as pd
from sklearn.neighbors import radius_neighbors_graph
from torch_geometric.utils.convert import from_scipy_sparse_matrix
from torch_geometric.utils import to_dense_adj

df = pd.read_csv("example.csv")

# Sparse CSR connectivity graph, built without going through networkx
A = radius_neighbors_graph(df.values, 1, mode='connectivity', include_self=False)
g = from_scipy_sparse_matrix(A)
g = to_dense_adj(g)
```
This generates the following error:

```
Traceback (most recent call last):
  File "d:\[edited for privacy]\gnn_precaculated_inputs.py", line 30, in <module>
    g = to_dense_adj(g)
        ^^^^^^^^^^^^^^^
  File "D:\[env path]\Lib\site-packages\torch_geometric\utils\_to_dense_adj.py", line 64, in to_dense_adj
    max_index = int(edge_index.max()) + 1 if edge_index.numel() > 0 else 0
                    ^^^^^^^^^^^^^^^^
AttributeError: 'tuple' object has no attribute 'numel'
```
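The traceback points at the likely cause: `from_scipy_sparse_matrix` returns an `(edge_index, edge_weight)` tuple rather than a single tensor, so `to_dense_adj` receives a tuple and `edge_index.numel()` fails. A minimal sketch of two possible workarounds, with a hand-built 3×3 connectivity matrix standing in for the `radius_neighbors_graph` output so the snippet needs neither sklearn nor `example.csv`:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Stand-in for the CSR connectivity matrix radius_neighbors_graph returns
A = csr_matrix(np.array([[0, 1, 0],
                         [1, 0, 1],
                         [0, 1, 0]], dtype=np.float64))

# Workaround 1: densify the CSR matrix directly with scipy -- no networkx
# and no torch_geometric round-trip needed.
dense = A.toarray()
print(dense.shape)  # (3, 3)

# Workaround 2 (sketch): unpack the tuple before calling to_dense_adj:
#   edge_index, edge_weight = from_scipy_sparse_matrix(A)
#   dense = to_dense_adj(edge_index)[0]  # drop the leading batch dimension
```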
example.csv
Thanks.
### Versions
```
Collecting environment information...
PyTorch version: 2.2.2
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 10 Enterprise
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A

Python version: 3.12.3 | packaged by Anaconda, Inc. | (main, May 6 2024, 19:42:21) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A5000 Laptop GPU
Nvidia driver version: 538.27
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture=9
CurrentClockSpeed=2611
DeviceID=CPU0
Family=179
L2CacheSize=10240
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2611
Name=Intel(R) Xeon(R) W-11955M CPU @ 2.60GHz
ProcessorType=3
Revision=

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.2.2
[pip3] torch_geometric==2.5.2
[pip3] torch_scatter==2.1.2+pt22cu121
[pip3] torch_sparse==0.6.18+pt22cu121
[pip3] torchaudio==2.2.2
[pip3] torchvision==0.17.2
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6b88ed4_46358
[conda] mkl-service 2.4.0 py312h2bbff1b_1
[conda] mkl_fft 1.3.8 py312h2bbff1b_0
[conda] mkl_random 1.2.4 py312h59b6b97_0
[conda] numpy 1.26.4 py312hfd52020_0
[conda] numpy-base 1.26.4 py312h4dde369_0
[conda] pyg 2.5.2 py312_torch_2.2.0_cu121 pyg
[conda] pytorch 2.2.2 py3.12_cuda12.1_cudnn8_0 pytorch
[conda] pytorch-cuda 12.1 hde6ce7c_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-scatter 2.1.2+pt22cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt22cu121 pypi_0 pypi
[conda] torchaudio 2.2.2 pypi_0 pypi
[conda] torchvision 0.17.2 pypi_0 pypi
```