Closed Ha0Tang closed 4 years ago
Hi, thanks. What's the tensor shape of emb1, emb2, emb1_new, and emb2_new?
My task is, input two feature maps F1, F2 (both are N X C X H X W) and aim to output two updated feature maps F1', F2' (both are still N X C X H X W). I would like to use Algorithm 1 proposed in your paper to do this task. Do you have any suggestions? Thanks.
The shape of `emb1`, `emb2`, `emb1_new`, and `emb2_new` should be (N x H x C), where N is the batch size, H is the number of nodes in one graph, and C is the size of the feature channel.
For your task, you may transpose and reshape your F1, F2 into (N x HW x C) (if I don't get these notations wrong), and it should work with my implementation.
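The transpose-and-reshape step above can be sketched as follows. This is a minimal illustration with made-up sizes (N, C, H, W are hypothetical), not code from the repo:

```python
import torch

# Hypothetical shapes for illustration.
N, C, H, W = 2, 64, 16, 16
F1 = torch.randn(N, C, H, W)
F2 = torch.randn(N, C, H, W)

# Flatten the spatial dims and move channels last: (N, C, H, W) -> (N, HW, C),
# so each of the H*W spatial locations becomes one graph node.
emb1 = F1.flatten(2).transpose(1, 2)  # (N, HW, C)
emb2 = F2.flatten(2).transpose(1, 2)  # (N, HW, C)

# To map updated embeddings back to feature maps: (N, HW, C) -> (N, C, H, W).
F1_new = emb1.transpose(1, 2).reshape(N, C, H, W)
```

The round trip is lossless, so the updated embeddings can be reshaped back to N x C x H x W feature maps the same way.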
Thanks, and what do `ns_src` and `ns_tgt` in `s = self.voting_layer(s, ns_src, ns_tgt)` stand for?
`ns_src` and `ns_tgt` stand for the numbers of nodes in the two graphs, as we allow different numbers of nodes within a single batch.
We have `HW` nodes in each graph, so `ns_src` and `ns_tgt` should be set to `HW`?
When I run `s = self.voting_layer(s, H*W, H*W)`, I get the following error:
`for b, n in enumerate(nrow_gt): TypeError: 'int' object is not iterable`
It should be an `N`-sized tensor containing `H*W`.
Do you mean `ns_src` and `ns_tgt` should be two-dimensional, i.e., `N x HW`? (where `H` and `W` are the height and width of the input feature, and `N` is the number of batches)
No, it should be `[HW, HW, HW, ...]`, i.e., a 1-D tensor containing `N` copies of `HW`.
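Concretely, the per-sample node counts described above can be built like this (N, H, W are hypothetical sizes, not values from the repo):

```python
import torch

# Hypothetical sizes for illustration.
N, H, W = 4, 16, 16

# voting_layer expects per-sample node counts as an N-sized tensor,
# not a plain Python int, so fill a tensor with HW for every sample.
ns_src = torch.full((N,), H * W, dtype=torch.int64)
ns_tgt = torch.full((N,), H * W, dtype=torch.int64)
```

Passing a plain `int` is what triggers the `'int' object is not iterable` error, since the layer iterates over the per-batch counts.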
Thanks, what do `A_src`, `A_tgt`, `G_src`, `H_src`, `G_tgt`, and `H_tgt` stand for?
Since the features F1 and F2 are directly extracted from images in my task, do you think I need to execute `emb1, emb2 = gnn_layer([A_src, emb1], [A_tgt, emb2])` before calculating the affinity?
No, you needn't. `A_src`, `A_tgt`, etc. all describe the graph connectivity, so you needn't care about them since you are working with images.
Thank you for your interest.
https://github.com/Thinklab-SJTU/PCA-GM/blob/master/PCA/model.py#L72 from L72 to L81.