Cartus / AGGCN

Attention Guided Graph Convolutional Networks for Relation Extraction (authors' PyTorch implementation for the ACL19 paper)

h_out's dimension problem ?? #14

Closed · antxyz closed this issue 4 years ago

antxyz commented 4 years ago

As you mentioned in your paper, h_out has dimension d × N. Shouldn't it have dimension (d × N) × N?

Cartus commented 4 years ago

Hi @antxyz ,

I am a little bit lost... Could you explain more about your question?

antxyz commented 4 years ago

In your AGGCN paper, each block is composed of an attention guided layer, a densely connected layer, and a linear combination layer. I want to know whether the dimension of the input h_out = {h1, h2, ..., hN} to the linear combination layer is d × N or (d × N) × N.

Cartus commented 4 years ago

I see. The input is d × N.

d is the input dimension of the densely connected layer. You can treat the densely connected layer as a black box: its output dimension is still d, no matter what calculation happens inside the box.

Assume you have N different adjacency matrices generated by the attention guided layer. Then you have N different GCNs (densely connected, with no parameter sharing) to encode them.

We want one final representation, so we simply combine these N outputs (each of dimension d) into one by a linear transformation.
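For anyone else tripping over the shapes, here is a minimal PyTorch sketch of the bookkeeping described above. It is not the repo's actual code: the class name, the single-hop GCN used as a stand-in for each densely connected layer, and all sizes are made up for illustration.

```python
import torch
import torch.nn as nn

class MultiHeadGCNCombine(nn.Module):
    """Toy sketch of the block's shape bookkeeping (illustrative only).

    N attention-guided adjacency matrices -> N separate "black box" GCNs
    (each maps d -> d) -> concatenate to d * N -> linear combination -> d.
    """
    def __init__(self, d, N):
        super().__init__()
        # Stand-ins for the N densely connected layers; each maps d -> d.
        # The real layers use dense connections internally, but the output
        # dimension is still d either way.
        self.gcns = nn.ModuleList([nn.Linear(d, d) for _ in range(N)])
        # Linear combination layer: (d * N) -> d.
        self.combine = nn.Linear(d * N, d)

    def forward(self, h, adjs):
        # h:    (batch, seq_len, d)   node representations
        # adjs: list of N (batch, seq_len, seq_len) attention-guided matrices
        outs = []
        for A, gcn in zip(adjs, self.gcns):
            # Simplified one-hop GCN step per adjacency matrix:
            # aggregate neighbors, then transform.
            outs.append(torch.relu(gcn(A @ h)))   # each: (batch, seq_len, d)
        h_out = torch.cat(outs, dim=-1)           # (batch, seq_len, d * N)
        return self.combine(h_out)                # (batch, seq_len, d)

# Shape check with made-up sizes:
d, N, B, L = 8, 3, 2, 5
block = MultiHeadGCNCombine(d, N)
h = torch.randn(B, L, d)
adjs = [torch.softmax(torch.randn(B, L, L), dim=-1) for _ in range(N)]
print(block(h, adjs).shape)  # torch.Size([2, 5, 8])
```

The point is just that each of the N branches maps d to d, so concatenating gives d × N, and one linear layer collapses that back to d.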

antxyz commented 4 years ago

I got it. Thank you very much.