I am converting a custom PyG dataset to deepSNAP and then using a DataLoader with the collate_fn provided by deepSNAP. After the conversion, the overall pipeline became incredibly slow, and I found that this was because the NetworkX graph was being collated along with the rest of the data.
I have currently worked around the issue by wrapping the collate_fn in the following function, which removes unwanted keys, such as G or any other data that I won't be using in the pipeline:
from typing import Callable, List, Optional

from deepsnap.batch import Batch
from deepsnap.graph import Graph


def from_data_list_ignore_keys(
    data_list: List[Graph],
    keys_to_ignore: Optional[List[str]] = None,
    follow_batch: Optional[List] = None,
    transform: Optional[Callable] = None,
    **kwargs
):
    # Null out the unwanted attributes (e.g. the NetworkX graph "G")
    # so they are not collated into the batch.
    if keys_to_ignore is not None:
        for data in data_list:
            for key in keys_to_ignore:
                data[key] = None
    return Batch.from_data_list(
        data_list=data_list,
        follow_batch=follow_batch,
        transform=transform,
        **kwargs,
    )
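To illustrate the pattern without depending on deepSNAP, here is a minimal, self-contained sketch: a stand-in collate function (playing the role of Batch.from_data_list) is wrapped so that heavyweight keys are nulled out before collation, and functools.partial binds the keys so the wrapper can be passed as a DataLoader's collate_fn. The collate and sample names here are hypothetical stand-ins, not deepSNAP APIs.

```python
from functools import partial
from typing import Dict, List, Optional


def collate(data_list: List[Dict]) -> Dict:
    # Hypothetical stand-in for deepSNAP's Batch.from_data_list:
    # merges a list of dict-like samples into one batch dict.
    return {key: [d[key] for d in data_list] for key in data_list[0]}


def collate_ignore_keys(data_list: List[Dict],
                        keys_to_ignore: Optional[List[str]] = None) -> Dict:
    # Drop heavyweight entries (e.g. the NetworkX graph under "G")
    # before collation so they are never copied into the batch.
    if keys_to_ignore is not None:
        for data in data_list:
            for key in keys_to_ignore:
                data[key] = None
    return collate(data_list)


samples = [{"x": 1, "G": object()}, {"x": 2, "G": object()}]
# partial() pre-binds keys_to_ignore, yielding a one-argument callable
# suitable for DataLoader(collate_fn=...).
collate_fn = partial(collate_ignore_keys, keys_to_ignore=["G"])
batch = collate_fn(samples)
# batch["x"] == [1, 2]; batch["G"] == [None, None]
```

The same partial-application trick is how the wrapper above can be handed to torch.utils.data.DataLoader, which expects a collate_fn taking only the list of samples.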
Am I doing something wrong, or would it be reasonable to integrate the ability to choose which keys to collate directly into the library?