Hi~
I have a few more questions and look forward to your reply.
In the paper you said: "For graphs with vertex labels or attributes, X can be the one-hot encoding matrix of the vertex labels or the matrix of multidimensional vertex attributes." So for a graph with 3 nodes and no node attributes, X can be represented as follows:
np.eye(3) =
[[1, 0, 0],
[0, 1, 0],
[0, 0, 1]]
so X.shape = (n, c), with c exactly equal to n, right?
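To make sure I'm reading this correctly, here is the construction I have in mind (just my own sketch, not code from the paper):

```python
import numpy as np

# For a graph with no vertex attributes, use the identity matrix as the
# feature matrix: row i is the one-hot encoding of vertex i.
n = 3
X = np.eye(n)      # shape (n, c), and here c == n
print(X.shape)     # (3, 3)
```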
If so, what puzzles me is that the dimension of W (W.shape = (c, c'), with c = n) in the first graph convolution layer would then differ from graph to graph, yet W is shared across all inputs. How do you explain and handle this?
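A toy illustration of the mismatch I'm asking about (the `first_layer` below is hypothetical and omits the adjacency propagation, so it only shows the shapes):

```python
import numpy as np

rng = np.random.default_rng(0)

def first_layer(X, W):
    # One linear step of a graph convolution (adjacency term omitted):
    # X has shape (n, c), W has shape (c, c'), output has shape (n, c').
    return X @ W

c_prime = 4
X3 = np.eye(3)                          # 3-vertex graph: c == 3
W3 = rng.standard_normal((3, c_prime))  # W fitted for c == 3
print(first_layer(X3, W3).shape)        # (3, 4)

X5 = np.eye(5)                          # 5-vertex graph: c == 5
try:
    first_layer(X5, W3)                 # W3 expects c == 3, so this fails
except ValueError:
    print("shape mismatch: W for c=3 cannot be applied to a 5-vertex graph")
```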
Second, I noticed that the optimization method you used to minimize the loss is SGD with ADAM. What I want to know is whether this model cannot be vectorized, i.e. whether batch_size can only be set to 1?
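For context on what I mean by vectorizing: one workaround I have seen elsewhere for batching variable-size graphs (not necessarily what your implementation does) is to merge a batch into one disjoint-union graph with a block-diagonal adjacency matrix, assuming the feature dimension c is the same across graphs. A minimal sketch:

```python
import numpy as np

def batch_graphs(adjs, feats):
    # Stack a list of graphs into one disjoint union:
    # block-diagonal adjacency matrix and row-stacked feature matrices.
    n_total = sum(a.shape[0] for a in adjs)
    A = np.zeros((n_total, n_total))
    offset = 0
    for a in adjs:
        n = a.shape[0]
        A[offset:offset + n, offset:offset + n] = a
        offset += n
    X = np.vstack(feats)
    return A, X

A1 = np.array([[0, 1], [1, 0]])                    # 2-vertex graph
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # 3-vertex graph
X1, X2 = np.ones((2, 4)), np.ones((3, 4))          # shared c = 4
A, X = batch_graphs([A1, A2], [X1, X2])
print(A.shape, X.shape)  # (5, 5) (5, 4)
```

Since the two graphs share no edges in A, one convolution over the merged graph is equivalent to convolving each graph separately. Is something like this applicable to your model, or does the pooling layer prevent it?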
Thanks sincerely!