EliHei2 opened 1 month ago
Thanks for reporting. Do you have a minimal example that operates on a `torch_geometric.datasets` dataset provided by PyG? Does this only occur on heterogeneous graphs? You can probably get around this by passing the `batch_size` to the logger commands.
Hey @rusty1s, thanks for your quick response. Adding `batch_size` indeed solves the problem (see the following code). I overlooked the traceback; it was kinda obvious.
```python
...
self.log("validation_loss", loss, on_step=False, on_epoch=True, batch_size=32)
...
```
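For context, here is a minimal sketch of where that call sits. The model and loss are hypothetical stand-ins (only the `self.log(...)` line is the actual fix); the tensor shapes are borrowed from the `HeteroData` example below:

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class LitLinkPredictor(pl.LightningModule):
    """Hypothetical stand-in for the model in the issue."""

    def __init__(self):
        super().__init__()
        # Maps 'tx' features (280-dim) into the 'nc' feature space (4-dim):
        self.lin = torch.nn.Linear(280, 4)

    def validation_step(self, batch, batch_idx):
        # Hypothetical link-level loss on the ('tx', 'belongs', 'nc') edges:
        src, dst = batch['tx', 'belongs', 'nc'].edge_label_index
        logits = (self.lin(batch['tx'].x)[src] * batch['nc'].x[dst]).sum(dim=-1)
        loss = F.binary_cross_entropy_with_logits(
            logits, batch['tx', 'belongs', 'nc'].edge_label.float())
        # Passing batch_size explicitly keeps Lightning from trying to infer
        # it from the HeteroData batch:
        self.log("validation_loss", loss, on_step=False, on_epoch=True,
                 batch_size=32)
        return loss
```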
But I'm still wondering when the casting to `FeatureStore` happens (as I never used it explicitly). I don't know how to share the dataset, but this is one example `HeteroData` from it:
```
HeteroData(
  tx={
    pos=[443, 3],
    x=[443, 280],
  },
  nc={ x=[2, 4] },
  (tx, belongs, nc)={
    edge_index=[2, 128],
    edge_label=[256],
    edge_label_index=[2, 256],
  },
  (tx, neighbors, tx)={ edge_index=[2, 6475] }
)
```
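For anyone trying to reproduce this without the original data, a graph with the same shapes can be built from random tensors (a minimal sketch; all values are placeholders):

```python
import torch
from torch_geometric.data import HeteroData

data = HeteroData()
# Node stores, matching the shapes printed above:
data['tx'].pos = torch.randn(443, 3)
data['tx'].x = torch.randn(443, 280)
data['nc'].x = torch.randn(2, 4)
# Edge stores; source/target indices stay within each node type's count:
data['tx', 'belongs', 'nc'].edge_index = torch.stack([
    torch.randint(0, 443, (128,)), torch.randint(0, 2, (128,))])
data['tx', 'belongs', 'nc'].edge_label = torch.randint(0, 2, (256,))
data['tx', 'belongs', 'nc'].edge_label_index = torch.stack([
    torch.randint(0, 443, (256,)), torch.randint(0, 2, (256,))])
data['tx', 'neighbors', 'tx'].edge_index = torch.randint(0, 443, (2, 6475))
```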
I also never tried this with a homogeneous `Data` object before, so I have no idea whether the error appears there as well.
> But I'm still wondering when the casting to `FeatureStore` happens (as I never used it explicitly)
PL uses some internal logic to infer the batch size by iterating over attributes. Since `Data` inherits from `FeatureStore`, there is no casting involved; PL just ends up trying to access a method from `FeatureStore` that isn't available at the `Data` level.
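A quick way to see this (a minimal check, assuming a recent PyG version in which `Data` and `HeteroData` implement the `FeatureStore` interface):

```python
from torch_geometric.data import Data, HeteroData, FeatureStore

# Both graph containers are FeatureStore subclasses by inheritance,
# which is why Lightning's type checks take the FeatureStore path:
print(isinstance(Data(), FeatureStore))        # True
print(isinstance(HeteroData(), FeatureStore))  # True
```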
🐛 Describe the bug
Hello esteemed PyG developers,
Trying to train the following simple model: training without the validation dataloader works fine, but when I add the validation dataloader I get the following error.
To my understanding, Lightning treats the batches from the `DataLoader` as a `FeatureStore`, but I couldn't dig deeper into what exactly is happening. Worth mentioning that my graph is a `HeteroData`. I would very much appreciate it if you could give me an idea of what's happening there.

Versions
```
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 11.1.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.17

Python version: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.114.2.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM3-32GB
GPU 1: Tesla V100-SXM3-32GB
GPU 2: Tesla V100-SXM3-32GB
GPU 3: Tesla V100-SXM3-32GB
GPU 4: Tesla V100-SXM3-32GB
GPU 5: Tesla V100-SXM3-32GB
GPU 6: Tesla V100-SXM3-32GB
GPU 7: Tesla V100-SXM3-32GB
GPU 8: Tesla V100-SXM3-32GB
GPU 9: Tesla V100-SXM3-32GB
GPU 10: Tesla V100-SXM3-32GB
GPU 11: Tesla V100-SXM3-32GB
GPU 12: Tesla V100-SXM3-32GB
GPU 13: Tesla V100-SXM3-32GB
GPU 14: Tesla V100-SXM3-32GB
GPU 15: Tesla V100-SXM3-32GB

Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                96
On-line CPU(s) list:   0-95
Thread(s) per core:    2
Core(s) per socket:    24
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz
Stepping:              4
CPU MHz:               3083.807
CPU max MHz:           3700.0000
CPU min MHz:           1200.0000
BogoMIPS:              5400.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              33792K
NUMA node0 CPU(s):     0-23,48-71
NUMA node1 CPU(s):     24-47,72-95
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba rsb_ctxsw ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.4
[pip3] numpydoc==1.5.0
[pip3] numpyro==0.12.1
[pip3] pytorch-lightning==2.0.2
[pip3] torch==2.0.1
[pip3] torch_cluster==1.6.3+pt23cu121
[pip3] torch_geometric==2.5.2
[pip3] torch_scatter==2.1.2+pt23cu121
[pip3] torch_sparse==0.6.18+pt23cu121
[pip3] torch_spline_conv==1.2.2+pt23cu121
[pip3] torchaudio==0.13.1
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.18.0
[pip3] triton==2.3.0
[conda] numpy                1.23.4            pypi_0    pypi
[conda] numpy-base           1.26.4            py39h8a23956_0
[conda] numpydoc             1.5.0             py39h06a4308_0
[conda] numpyro              0.12.1            pypi_0    pypi
[conda] pytorch-lightning    2.0.2             pypi_0    pypi
[conda] pytorch-mutex        1.0               cpu       pytorch
[conda] torch                2.3.0             pypi_0    pypi
[conda] torch-cluster        1.6.3+pt23cu121   pypi_0    pypi
[conda] torch-geometric      2.5.2             pypi_0    pypi
[conda] torch-scatter        2.1.2+pt23cu121   pypi_0    pypi
[conda] torch-sparse         0.6.18+pt23cu121  pypi_0    pypi
[conda] torch-spline-conv    1.2.2+pt23cu121   pypi_0    pypi
[conda] torchaudio           0.13.1            py39_cpu  pytorch
[conda] torchmetrics         0.11.4            pypi_0    pypi
[conda] torchvision          0.18.0            pypi_0    pypi
[conda] triton               2.3.0             pypi_0    pypi
```