Closed CarloLucibello closed 3 years ago
About graph networks, I suggest reading this paper, especially chapter 3, "Graph networks".
Yes, I know that paper, which is what the `GraphNet` paper implemented. The implementation in this PR is slightly more general. There are 2 differences:
- `M` is decoupled from the edge feature `E`. While all layers produce `M` in the message passing, it doesn't have to be identified with `E`, which most layers don't need to produce and store. `GraphNet` behavior can be simply reproduced with `update_edge(l::Layer, M, E) = M`.
- The `global_update` function now takes the new `E` and `X` instead of the corresponding aggregated quantities. This way we don't have to pass around two more aggregating operators and keep a few odd `aggregate(::typeof(min), ...)` definitions; users can define their own aggregations.

I think these are 2 positive changes, but they are not particularly relevant; I can revert them if you want.
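The decoupling of `M` from `E` might look roughly like this. `MyLayer` and the method signatures below are illustrative stand-ins, not the exact API of this PR:

```julia
# Hypothetical layer type, for illustration only.
struct MyLayer end

# Each layer produces a message M for every edge, from the target/source
# node features Xi, Xj and the current edge feature E...
message(l::MyLayer, Xi, Xj, E) = Xj .+ E

# ...but the stored edge features default to the old ones, so most
# layers never have to produce and store a new E.
update_edge(l::MyLayer, M, E) = E

# GraphNet behavior (edge features replaced by the messages) would then
# be a one-line overload:
#   update_edge(l::MyLayer, M, E) = M
```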
I think this is good to go; I don't want to overload a single PR, which is quite big already. We could adopt the checklist in the first post as a TODO list for the next release.
I'm breaking this into 2 or 3 PRs. The first one is #215.
Closing this due to the merge of yuehhua/GraphSignals.jl#54.
This is just an investigation of what would entail having a COO implementation for the FeaturedGraph.
I reimplemented `FeaturedGraph` inside this repo, as a subtype of `LightGraphs.AbstractGraph`. Tests aren't passing yet.
For the time being, this PR drops GraphSignals.jl as a dependency. If the investigation turns out to be successful, I will move the code to GraphSignals.
UPDATE:
I have done a large redesign of the library; the code is much simpler, and overall performance should be much better (especially on gpu).
- The `source`, `target` (COO) edges' representation is the natural fit for message passing. Everything is handled in a very concise way by `NNlib.gather` and `NNlib.scatter`.
- `ChebConv`!
- `adjacency_matrix` and `normalized_laplacian` can be expressed in a gpu-friendly way.
- `GCNConv` now has two implementations, both gpu-friendly: one based on message passing, one on multiplication by the normalized laplacian. The second one is commented out, waiting for adjacency matrix storage support in `FeaturedGraph` (which is the case where it would make sense to use the laplacian algebra instead of message passing).
- Merged the `GraphNet` and `MessagePassing` types into `MessagePassing`. There seems to be no need (efficiency/flexibility/convenience) to have both.
- The `message` and `update` functions now deal with batched node/edge features coming from `gather` and `scatter`. Performance-wise, this is much better than the previous implementation relying on `mapreduce`.

Fix #185, fix #194, fix #195, fix #197, fix #200, fix #209.
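As a rough illustration of why the COO representation fits message passing so well, here is a minimal propagation step built on `NNlib.gather`/`NNlib.scatter`. `propagate_sketch` and the identity message are hypothetical, for demonstration only, not the layer API of this PR:

```julia
using NNlib: gather, scatter

# X: (num_features, num_nodes) node features
# s, t: source and target node index of each edge (COO representation)
function propagate_sketch(X, s, t)
    Xj = gather(X, s)   # features of each edge's source node, one column per edge
    M  = Xj             # identity `message`: pass source features along edges
    # aggregate the messages at each target node by summation;
    # nodes with no incoming edges keep their zero-initialized columns
    scatter(+, M, t; dstsize=size(X))
end

X = Float32[1 2 3; 10 20 30]   # 2 features, 3 nodes
s = [1, 2, 3]; t = [2, 3, 2]   # edges 1→2, 2→3, 3→2
propagate_sketch(X, s, t)      # node 2 sums the messages from nodes 1 and 3
```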
TODO list (this PR or future ones):
- `GCNConv` when using message passing
- `gather`/`scatter` handle nicely the case of isolated nodes
- `ChebConv` with message passing
- `scaled_laplacian`
- `FeaturedGraph` support for (sparse) adjacency matrix underlying representation
- move the `FeaturedGraph` implementation back to `GraphSignals`
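On the isolated-nodes item: with `NNlib.scatter`, one way to keep nodes that receive no messages is to pass an explicit `dstsize`, so their columns stay at the neutral element. A sketch under that assumption:

```julia
using NNlib: scatter

# 1 message feature per edge, 2 edges, both pointing at node 1;
# nodes 2 and 3 receive nothing
M = Float32[5 7]
t = [1, 1]

# Without dstsize the output width is inferred from maximum(t), which
# would silently drop trailing isolated nodes; dstsize keeps them.
agg = scatter(+, M, t; dstsize=(1, 3))
# agg == Float32[12 0 0]
```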