FluxML / GeometricFlux.jl

Geometric Deep Learning for Flux
https://fluxml.ai/GeometricFlux.jl/stable/
MIT License

implement graph concatenation #218

Open CarloLucibello opened 3 years ago

CarloLucibello commented 3 years ago

When training on multiple small graphs, one typically batches several graphs together into a single larger graph for efficiency. This operation is called blockdiag in SparseArrays and LightGraphs.jl.

For FeaturedGraphs, node and edge features should be vertically concatenated in the resulting graph. I'm not sure how we should handle global features; maybe we should just require them to be == nothing for all graphs as a start.
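
A minimal sketch of the blockdiag idea using only SparseArrays (not the GeometricFlux API); the adjacency matrices and the nodes-as-rows feature layout are illustrative assumptions:

```julia
using SparseArrays

# Two small graphs as sparse adjacency matrices (illustrative data).
A1 = sparse([0 1; 1 0])            # 2-node graph
A2 = sparse([0 1 0; 1 0 1; 0 1 0]) # 3-node path graph

# blockdiag stacks them on the diagonal of one larger adjacency matrix:
# the batched graph has 5 nodes and no edges between the two components.
A = blockdiag(A1, A2)

# With one row per node, node features concatenate vertically, as above.
X1 = rand(2, 4)   # 2 nodes × 4 features
X2 = rand(3, 4)   # 3 nodes × 4 features
X  = vcat(X1, X2) # 5 × 4 feature matrix for the batched graph
```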

yuehhua commented 3 years ago

For the same issue, there may be another approach to deal with this. Has parallelism been considered?

> I'm not sure how we should handle global features; maybe we should just require them to be == nothing for all graphs as a start.

I think the global features can be batched up and passed to subsequent layers, for example an MLP.
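
A minimal Flux sketch of that suggestion; the feature size and layer widths are made up for illustration:

```julia
using Flux

# Per-graph global features, batched as columns (Flux's batch convention).
g1 = rand(Float32, 8)  # global feature vector of graph 1
g2 = rand(Float32, 8)  # global feature vector of graph 2
G  = hcat(g1, g2)      # 8 × 2 batch of global features

# An MLP processes all graphs' global features in one call.
mlp = Chain(Dense(8 => 16, relu), Dense(16 => 4))
out = mlp(G)           # 4 × 2: one output column per graph
```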

CarloLucibello commented 3 years ago

> For the same issue, there may be another approach to deal with this. Has parallelism been considered?

In GNNs the graph size is essentially equivalent to the batch size, so yes, graph concatenation is done in order to leverage parallelized operations.
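
To illustrate (a sketch, not the package's implementation): a single sparse product over the block-diagonal adjacency aggregates neighbors for every graph in the batch at once, since the zero off-diagonal blocks prevent mixing between graphs.

```julia
using SparseArrays

# Block-diagonal adjacency for a batch of two graphs (same example as above).
A = blockdiag(sparse([0 1; 1 0]), sparse([0 1 0; 1 0 1; 0 1 0]))
X = rand(Float32, 5, 4) # features for all 5 nodes in the batch

# One matrix product computes neighbor sums for both graphs in parallel;
# no entries mix across graphs because the off-diagonal blocks are zero.
H = A * X
```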

eahenle commented 1 year ago

The docs suggest this has been implemented, but the issue being open suggests it has not. Can someone clarify this?