Hi, first of all thanks for the fantastic project.
I'm new to graph neural networks, and also to using PyTorch and PyTorch Geometric to implement them. For my project, I want to use PyTorch Geometric convolutions such as GCN to extract node embeddings. I went through some tutorials for the PyTorch Geometric and PyTorch Geometric Temporal libraries, but I noticed that they use either single-feature multi-snapshot datasets or multi-feature single-snapshot datasets. Are there any best practices or templates for applying graph convolutions to multi-feature multi-snapshot datasets?
P.S.: the datasets I want to use are PeMSBay and METR-LA, whose data tensors have the shape:
(# of Nodes, # of Node Features, # of Snapshots)
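To make the layout I mean concrete, here is a minimal sketch of the setup (plain NumPy rather than PyTorch Geometric, so the sizes, variable names, and the per-snapshot loop are just illustrative assumptions): a GCN-style propagation with a symmetrically normalized adjacency, applied independently to each snapshot of a (nodes, features, snapshots) tensor, with weights shared across snapshots.

```python
import numpy as np

# Hypothetical toy sizes mirroring a (nodes, features, snapshots) dataset
num_nodes, num_feats, num_snaps, out_dim = 4, 2, 3, 5

rng = np.random.default_rng(0)
X = rng.normal(size=(num_nodes, num_feats, num_snaps))  # node features per snapshot
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # static sensor graph (same for all snapshots)

# Symmetrically normalized adjacency with self-loops, as in the GCN paper
A_hat = A + np.eye(num_nodes)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))

W = rng.normal(size=(num_feats, out_dim))  # one weight matrix shared across snapshots

# Apply the same graph convolution to every snapshot
embeddings = np.stack(
    [A_norm @ X[:, :, t] @ W for t in range(num_snaps)], axis=-1
)
print(embeddings.shape)  # (num_nodes, out_dim, num_snaps)
```

The open question is essentially whether looping over snapshots like this (or batching them) is the intended pattern in the library, or whether there is a recommended recurrent/temporal wrapper for it.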