🐛 Describe the bug
If I try to create a `StochasticBlockModelDataset` with more than around 6-8 blocks, I get a `File name too long` error because each block size is individually copied into the file name string. I'm interested in a specific scenario where there are many blocks (think >16) that all have the same size. In this case, `processed_file_names` could simply count how many blocks of each size appear in the graph and list these "degeneracy counts" next to each block size in the hash.
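As a rough illustration of the idea, a helper along these lines could collapse repeated block sizes into `<size>x<count>` tokens before they enter the file name. This is only a sketch; `compact_block_sizes` is a hypothetical name, not an existing PyG function:

```python
from collections import Counter

def compact_block_sizes(block_sizes):
    """Compress a list of block sizes into '<size>x<count>' tokens
    so the processed file name stays short even for many blocks.

    Hypothetical helper, not part of the PyG API.
    """
    counts = Counter(block_sizes)
    # Sort by block size so the resulting name is deterministic.
    return "_".join(f"{size}x{count}" for size, count in sorted(counts.items()))

# 20 equal-sized blocks collapse into a single short token
# instead of 20 repeated numbers:
print(compact_block_sizes([50] * 20))      # '50x20'
print(compact_block_sizes([10, 10, 20]))   # '10x2_20x1'
```

With this scheme, the file name length grows with the number of *distinct* block sizes rather than the total number of blocks.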
Environment
PyG version: 2.1.0
PyTorch version: 1.12.1
OS:
Python version: 3.10.4
CUDA/cuDNN version:
How you installed PyTorch and PyG (conda, pip, source): conda
Any other relevant information (e.g., version of torch-scatter):