yangxuntu / SGAE


A Question About Graph Convolution Layers and Embeddings #32

Open malsulaimi opened 3 years ago

malsulaimi commented 3 years ago

Thank you for the great work.

In you paper you mentioned : " We use four spatial graph convolutions: g.r, g.a, g.s, and g.o for generating the above mentioned three kinds of embeddings. In our implementation, all these four functions have the same structure with independent parameters: a vector concatenation input to a fully-connected layer, followed by an ReLU. "

I'm a bit confused here. I should first admit that I'm new to graph networks and not aware of all possible implementations of graph convolution. But my understanding from the paragraph above is that your convolution operation is only a vector concatenation of the node embeddings (e), multiplied by some weights, with a non-linearity applied afterwards. Am I correct here? There is no adjacency matrix or feature matrix involved in the operation. I sketch my reading below.
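To make my question concrete, here is a minimal sketch of how I currently read that paragraph (this is not your code; the layer name, dimensions, and the three-input example are placeholders I made up):

```python
import torch
import torch.nn as nn

class SpatialGraphConv(nn.Module):
    """My reading of g_r / g_a / g_s / g_o: concatenate the node
    embeddings, apply one fully-connected layer, then a ReLU."""

    def __init__(self, embed_dim=1000, num_inputs=3):
        super().__init__()
        # concatenation of `num_inputs` node embeddings -> one output embedding
        self.fc = nn.Linear(num_inputs * embed_dim, embed_dim)

    def forward(self, *node_embeds):
        # no adjacency-matrix or feature-matrix multiplication,
        # just concat + FC + ReLU
        x = torch.cat(node_embeds, dim=-1)
        return torch.relu(self.fc(x))

# e.g. a relationship embedding computed from (subject, predicate, object) nodes
g_r = SpatialGraphConv()
e_s, e_p, e_o = (torch.randn(1, 1000) for _ in range(3))
x_r = g_r(e_s, e_p, e_o)
```

Is this roughly what the four functions do, each with its own independent parameters?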

Another question is about the node embeddings e_o, e_a, etc. Do you use any pre-trained embeddings to start with, or do you simply train these embeddings from scratch? The two cases I have in mind are sketched below.
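For clarity, these are the two setups I am asking about (purely illustrative; the vocabulary size, dimension, and the GloVe example are my assumptions, not taken from your code):

```python
import torch
import torch.nn as nn

vocab_size, embed_dim = 500, 1000  # placeholder sizes

# (a) trained from scratch: randomly initialised and learned end-to-end
embed_scratch = nn.Embedding(vocab_size, embed_dim)

# (b) initialised from pre-trained vectors (e.g. GloVe) loaded into the table
pretrained = torch.randn(vocab_size, embed_dim)  # stand-in for real pre-trained vectors
embed_pretrained = nn.Embedding.from_pretrained(pretrained, freeze=False)
```

Which of these corresponds to how e_o, e_a, e_r are obtained in your implementation?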

Thank you very much.