danielegrattarola / spektral

Graph Neural Networks with Keras and Tensorflow 2.
https://graphneural.network
MIT License

accessing to the learned edge matrix #83

Closed andreapi87 closed 4 years ago

andreapi87 commented 4 years ago

Hi all! As I understand it, there are convolutional layers that also learn the edge matrix (i.e., the graph structure), for example the EdgeConditionedConv layer. So, two questions:

1) The first is just personal curiosity: what is the point of passing the adjacency matrix together with the edge feature matrix? In other words, what is the use of the adjacency matrix A if the connections between nodes can be derived from the edge matrix E?

2) More importantly: is it possible to access the learned edge matrix (i.e., to know the learned graph structure)?

Thank you for your attention and for the wonderful work you are doing!

PS It seems that the github link from the main site https://graphneural.network/ is broken

danielegrattarola commented 4 years ago

Hi,

I'm not sure that we're on the same page with regard to the EdgeConditionedConv layer.

ECC does not learn the edge matrix (which usually refers to the task of learning the connections between nodes), but uses the edge features matrix to compute a representation of the nodes. In other words, ECC is just like GCN, with the added caveat that it uses edge features to compute the message passing operation (so for instance, if two people are connected in a social network, ECC would also consider which type of connection it is -- friends, colleagues, family, etc.).

With that said, to answer your questions:

  1. We pass both A and E because ideally they represent two different things. Matrix A gives you the structure ("this is connected to that") while matrix E gives you the attributes of the connection. In other words, if an attributed edge is a triple (i, j, e), then (i, j) is stored in A, and (e) is stored in E. This gives a cleaner API overall.

  2. The answer to this depends on my comments above. There is no "learned" edge matrix, everything is given a priori. You can get the learned node representation by simply calling the layer on the input, but I'm not sure that this is what you want.
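To make the split between A and E (and the edge-conditioned aggregation) concrete, here is a minimal NumPy sketch. This is not Spektral's actual implementation; the single-layer "filter network" `W_net` and all shapes are illustrative assumptions:

```python
import numpy as np

# Toy graph: 3 nodes, node features of size 2, edge features of size 1.
N, F_in, S = 3, 2, 1
X = np.arange(N * F_in, dtype=float).reshape(N, F_in)   # node features
A = np.array([[0, 1, 0],                                # structure: which (i, j) are connected
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
E = np.random.rand(N, N, S)                             # attributes of each connection (i, j, e)

# ECC-style idea: a small network maps each edge feature e_ij to a weight
# matrix W_ij, which conditions the message passed from node j to node i.
rng = np.random.default_rng(0)
W_net = rng.standard_normal((S, F_in * F_in))           # toy stand-in for the filter network

H = np.zeros_like(X)
for i in range(N):
    for j in range(N):
        if A[i, j]:                                       # A says *whether* i and j are connected
            W_ij = (E[i, j] @ W_net).reshape(F_in, F_in)  # E says *how* (edge-conditioned weights)
            H[i] += W_ij @ X[j]

print(H.shape)  # (3, 2): new node representations; no adjacency is learned anywhere
```

Note that A only gates the sum while E parametrizes the weights, which is why both are passed as inputs and neither is a trainable quantity.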

Thanks for the positive feedback and let me know if you need further clarifications on this. Also, I could not find the broken link, which one are you referring to?

Cheers

andreapi87 commented 4 years ago

Thank you for your detailed answer!

Best regards!

danielegrattarola commented 4 years ago

Thanks for spotting the broken link(s), I'll remove them in the next update of the docs.

Regarding learning the adjacency matrix, you may want to look up "graph learning". A very popular paper on the subject is "Neural Relational Inference for Interacting Systems" (https://arxiv.org/abs/1802.04687), which is physics-oriented. Not sure if that's what you need, but it's a good starting point.

Everything required to implement that paper (and, in general, graph learning algorithms) should be available in Spektral already. Let me know if something is missing.
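As a rough illustration of the graph-learning idea (treating A as a trainable quantity instead of a given), a soft adjacency can be parametrized directly and plugged into message passing. This is a generic NumPy sketch under assumed shapes, not code from either paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

N, F_in = 4, 3
rng = np.random.default_rng(42)
X = rng.standard_normal((N, F_in))       # node features

# Instead of a fixed A, keep trainable logits and derive a soft adjacency.
# In a real model these logits would receive gradients from the task loss.
logits = rng.standard_normal((N, N))
A_soft = sigmoid(logits)                 # entries in (0, 1): "how connected" i and j are
np.fill_diagonal(A_soft, 0.0)            # no self-loops in this sketch

# One round of message passing over the soft structure.
deg = A_soft.sum(axis=1, keepdims=True)
H = (A_soft / deg) @ X                   # degree-normalized aggregation

print(A_soft.round(2))                   # this is the "learned edge matrix" one could inspect
```

After training, thresholding or sampling `A_soft` is one common way to read off a discrete graph structure.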

Cheers

andreapi87 commented 4 years ago

Thanks for your answer. However, I think this work https://www.sciencedirect.com/science/article/abs/pii/S0031320319302432 is more suitable for my case. Do you know if something similar has already been implemented?

danielegrattarola commented 4 years ago

I have not implemented this exact algorithm in Spektral but the core building blocks should all be there.

I was not able to find the code associated with that paper, maybe you can try sending an email to the authors and see if they can share it.

Cheers