Open CarloLucibello opened 3 years ago
For the same issue, there may be another approach to deal with this. Is parallelism being considered?
> I'm not sure how we should handle global features, maybe we should just require them to be `== nothing` for all graphs as a start
I think the global features could also be batched up and passed through layers, for example an MLP.
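A rough sketch of what I mean, assuming each graph carries a global feature vector and we stack them column-wise into a batch for a Flux MLP (the variable names and dimensions here are just illustrative):

```julia
using Flux

# Hypothetical global feature vectors, one per graph in the batch.
gf1 = rand(Float32, 8)
gf2 = rand(Float32, 8)

# Stack into an 8 × 2 matrix: one column per graph.
G = hcat(gf1, gf2)

# Pass the whole batch of global features through an MLP.
mlp = Chain(Dense(8 => 16, relu), Dense(16 => 4))
out = mlp(G)   # 4 × 2 output, one column per graph
```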
> For the same issue, there may be another approach to deal with this. Is parallelism being considered?
In GNNs the number of graphs in a batch is essentially equivalent to the batch size, so yes, graph concatenation is done in order to leverage parallelized operations.
The docs suggest this has been implemented, but the issue being open suggests it has not. Can someone clarify this?
When training on multiple small graphs, typically one batches several graphs together into a larger graph for efficiency. This operation is called `blockdiag` in SparseArrays and LightGraphs.jl.

For `FeaturedGraph`s, node and edge features should be vertically concatenated in the resulting graph. I'm not sure how we should handle global features, maybe we should just require them to be `== nothing` for all graphs as a start.
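A minimal sketch of the batching operation, assuming graphs are given as sparse adjacency matrices and node features as n × d matrices (the layout and names are illustrative, not the actual `FeaturedGraph` API):

```julia
using SparseArrays

# Two small graphs as sparse adjacency matrices.
A1 = sparse([0 1; 1 0])                 # graph 1: 2 nodes
A2 = sparse([0 1 1; 1 0 1; 1 1 0])      # graph 2: 3 nodes

# Node features, one row per node (n × d layout).
X1 = rand(2, 4)
X2 = rand(3, 4)

# Batch: block-diagonal adjacency, vertically concatenated features.
A = blockdiag(A1, A2)                   # 5 × 5 adjacency of the batched graph
X = vcat(X1, X2)                        # 5 × 4 node feature matrix

# Track which node belongs to which graph, e.g. for a later readout/pooling step.
graph_indicator = vcat(fill(1, size(A1, 1)), fill(2, size(A2, 1)))
```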