wen0618 opened 4 years ago

I used GCNet in my model and it works very well. But I want to know: what does the value of the transform module mean? Does it suggest the importance of each channel, i.e. the lower the value, the less important it is? Hoping for your answer, thanks!
Sorry for the late reply.
The value is a linear transformation that transforms the feature before the skip connection. Please refer to `g` in Fig. 2 of Non-local Neural Networks, `V` in Fig. 2 (left) of Attention Is All You Need, or `W_V` in Fig. 2 (right) of Relation Networks for more details.
Btw, it's good to hear that GCNet works. Thanks for your interest.
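For concreteness, here is a minimal PyTorch sketch of a GC block in its add-fusion form. The class and attribute names (`GCBlock`, `context`, `transform`) are illustrative, not the repo's exact API:

```python
import torch
import torch.nn as nn

class GCBlock(nn.Module):
    """Minimal sketch of a Global Context block (add fusion), assuming
    input channels `c` and bottleneck ratio `r`."""
    def __init__(self, c, r=16):
        super().__init__()
        # context modeling: 1x1 conv produces an attention logit per position
        self.context = nn.Conv2d(c, 1, kernel_size=1)
        # the "value"/transform module discussed above: a 1x1-conv bottleneck
        # applied to the pooled context before the skip connection
        self.transform = nn.Sequential(
            nn.Conv2d(c, c // r, kernel_size=1),
            nn.LayerNorm([c // r, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(c // r, c, kernel_size=1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # global attention pooling over all spatial positions
        attn = self.context(x).view(b, 1, h * w).softmax(dim=-1)    # (b, 1, hw)
        ctx = torch.bmm(x.view(b, c, h * w), attn.transpose(1, 2))  # (b, c, 1)
        ctx = ctx.view(b, c, 1, 1)
        # transform the context, then fuse via the skip (residual) connection
        return x + self.transform(ctx)
```

Note that `transform` here is applied to the pooled global context and its output is *added* to the input, broadcast across all positions.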
Thank you very much for your answer! According to my understanding, we can think of its value as the importance of the channel, can't we? By the way, I think GCNet is a kind of channel attention mechanism, right?
By the value of the transform module, are you referring to the output of the Global Context Block before the residual connection?
In that case, we are not explicitly using channel-wise attention, and the output doesn't necessarily imply channel importance.
Btw, SENet would be one example of a channel attention mechanism.
Moreover, the main idea of GCNet is to explicitly capture the global context, as the name indicates.
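For contrast, below is a minimal sketch of an SE block (again with illustrative names, following the standard SENet design). Its sigmoid output is an explicit per-channel importance weight that rescales the feature multiplicatively, whereas the GC block above adds its transformed context, so its output carries no such per-channel importance semantics:

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Minimal sketch of a Squeeze-and-Excitation block: here the sigmoid
    output *is* an explicit per-channel importance weight."""
    def __init__(self, c, r=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pool
        self.fc = nn.Sequential(                     # excitation: bottleneck + sigmoid
            nn.Conv2d(c, c // r, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(c // r, c, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # rescale each channel by its learned importance weight in [0, 1]
        return x * self.fc(self.pool(x))
```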