FluxML / GeometricFlux.jl

Geometric Deep Learning for Flux
https://fluxml.ai/GeometricFlux.jl/stable/
MIT License

revisit GCNConv implementation #197

Open · CarloLucibello opened this issue 3 years ago

CarloLucibello commented 3 years ago

Currently, the GCNConv implementation computes a dense Laplacian matrix for the graph on each forward pass. This doesn't scale well to large graphs.
The layer should instead implement neighborhood aggregation and become a MessagePassing layer.
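For context, here is a minimal sketch of the dense computation being criticized, following the standard GCN propagation rule H' = D^(-1/2) (A + I) D^(-1/2) H W. This is illustrative Julia, not the actual GeometricFlux source; `gcn_dense` and the n×f feature layout are assumptions:

```julia
using LinearAlgebra

# Hedged sketch (not the GeometricFlux source) of dense GCN propagation.
# Building the normalized operator as a dense n×n matrix on every forward
# pass costs O(n²) memory and time, which is what fails to scale.
function gcn_dense(A::AbstractMatrix, H::AbstractMatrix, W::AbstractMatrix)
    Ahat = A + I                               # adjacency with self-loops
    dinv = 1 ./ sqrt.(vec(sum(Ahat, dims=2)))  # D^(-1/2) diagonal entries
    Anorm = Diagonal(dinv) * Ahat * Diagonal(dinv)
    return Anorm * H * W                       # H is n×f, W is f×f′ (assumed)
end
```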

yuehhua commented 3 years ago

The forward pass of a GCNConv layer requires an algebraic computation, not index-based neighborhood aggregation via MessagePassing. To be honest, the message-passing scheme is not suitable for this kind of GNN layer, and the implementation in PyTorch Geometric gains no computational efficiency from it. PyTorch Geometric simply forces the GCNConv layer into the message-passing scheme, which is not required at all. Not only GCNConv but also ChebConv layers require algebraic computation, not neighbor indexing.
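To make the contrast concrete, here is a hedged sketch of the index-based gather form under discussion. The names (`gcn_gather`, the edge-list representation) are illustrative assumptions, not GeometricFlux or PyTorch Geometric API; it computes the same result as the algebraic `Anorm * H * W` above, just edge by edge:

```julia
# Illustrative edge-wise aggregation: for each edge (i, j), node i gathers
# HW[j, :] scaled by 1/sqrt(d_i * d_j). Self-loops are assumed to be
# already included in `edges`, and `d` holds the matching degrees.
function gcn_gather(edges::Vector{Tuple{Int,Int}}, d::Vector{<:Real},
                    H::AbstractMatrix, W::AbstractMatrix)
    HW = H * W
    out = zero(HW)
    for (i, j) in edges
        out[i, :] .+= HW[j, :] ./ sqrt(d[i] * d[j])
    end
    return out
end
```

Both forms compute the same propagation; the disagreement in this thread is over which formulation scales and composes better.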

yuehhua commented 3 years ago

Instead, to scale to large graphs, sparse array support should be considered. A sparse adjacency matrix should be accepted as a graph representation, and sparse computation on both CPU and GPU should be supported as well. For extremely large graphs, distributed computing should be considered; for example, Alibaba has developed a distributed graph deep learning framework for recommendation systems.
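A hedged sketch of the sparse, CPU-side version of the same propagation (illustrative only; `normalized_adjacency` and `gcn_sparse` are assumed names, and GPU support would require a sparse GPU array type beyond what is shown here):

```julia
using LinearAlgebra, SparseArrays

# Illustrative sketch: precompute the sparse normalized adjacency once,
# then reuse it across forward passes. Memory stays O(|E|), not O(n²).
function normalized_adjacency(A::SparseMatrixCSC)
    Ahat = A + I
    dinv = 1 ./ sqrt.(Vector(vec(sum(Ahat, dims=2))))
    return Diagonal(dinv) * Ahat * Diagonal(dinv)  # result is still sparse
end

# The forward pass then reduces to one sparse-dense multiply.
gcn_sparse(Anorm::SparseMatrixCSC, H::AbstractMatrix, W::AbstractMatrix) =
    Anorm * (H * W)
```

On GPU, the same `Anorm * (H * W)` form would map onto a sparse device matrix type (e.g. the CUSPARSE wrappers in CUDA.jl), which is the kind of support this comment asks for.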