Closed antxyz closed 4 years ago
Hi @antxyz ,
I am a little bit lost... Could you explain more about your question?
In your AGGCN paper, each block is composed of an attention guided layer, a densely connected layer, and a linear combination layer. I want to know whether the dimension of the input h_out = {h1, h2, ..., hN} to the linear combination layer is d × N or (d × N) × N.
I see. The input is d x N.
d is the input dimension of the densely connected layer. You can treat the densely connected layer as a black box: its output dimension is still d, no matter what computation happens inside the box.
Assume you have N different adjacency matrices generated by the attention guided layer. Then you have N different densely connected GCNs (their parameters are not shared) to encode them.
We want a single final representation. Therefore, we simply map these N outputs (each of dimension d) into one by using a linear transformation.
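The dimension flow described above can be sketched in a few lines of numpy. This is only an illustration of the shapes, not the AGGCN code itself; the names (n_tokens, d, N, W_comb) and the concrete sizes are illustrative assumptions.

```python
import numpy as np

n_tokens, d, N = 5, 8, 3  # sentence length, hidden dim, number of heads (illustrative)

# N densely connected GCN outputs, each of shape (n_tokens, d)
gcn_outputs = [np.random.randn(n_tokens, d) for _ in range(N)]

# Concatenate into h_out: each token now has d * N features
h_out = np.concatenate(gcn_outputs, axis=-1)   # shape: (n_tokens, d * N)

# Linear combination layer: a single linear map from d * N back to d
W_comb = np.random.randn(d * N, d)
b_comb = np.zeros(d)
h_comb = h_out @ W_comb + b_comb               # shape: (n_tokens, d)

print(h_out.shape, h_comb.shape)  # (5, 24) (5, 8)
```

So per token the concatenated input has d × N features, and the linear combination layer reduces it back to d.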
I got it. Thank you very much.
As you mentioned in your paper, h_out belongs to ℝ^(d × N). Shouldn't it belong to ℝ^((d × N) × N)?