DeepRNN / image_captioning

Tensorflow implementation of "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention"

Calculation of attention #16

Open wushilian opened 6 years ago

wushilian commented 6 years ago

In the code, you use a fully connected layer to calculate attention. Why not use the formula in <>?
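For reference, the soft-attention formulation in the paper is roughly the following, where the $a_i$ are the annotation (conv feature) vectors and $h_{t-1}$ is the previous LSTM state:

```latex
e_{t,i} = f_{\mathrm{att}}(a_i, h_{t-1}), \qquad
\alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{k}\exp(e_{t,k})}, \qquad
\hat{z}_t = \sum_{i} \alpha_{t,i}\, a_i
```

A fully connected layer (or small MLP) applied to the features and the hidden state is one common way to implement $f_{\mathrm{att}}$, so the code and the formula are not necessarily in conflict.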

wushilian commented 6 years ago

1

fanw52 commented 6 years ago

Hello, do you understand the method where fully connected layers are used to get the initial variables? @wushilian
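For context, in the paper the initial LSTM memory and hidden states are predicted from the mean of the annotation vectors by two separate MLPs. A minimal sketch of that idea, assuming TensorFlow 2 / Keras; the function and variable names are illustrative, not taken from this repo:

```python
import tensorflow as tf

def init_lstm_states(features, lstm_units):
    """Predict initial LSTM states (c0, h0) from the mean annotation vector.

    features: [batch, num_regions, feature_dim] conv feature map, flattened spatially.
    """
    mean_feature = tf.reduce_mean(features, axis=1)                        # [batch, feature_dim]
    c0 = tf.keras.layers.Dense(lstm_units, activation="tanh")(mean_feature)
    h0 = tf.keras.layers.Dense(lstm_units, activation="tanh")(mean_feature)
    return c0, h0
```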

fanw52 commented 6 years ago

And there is still another question: how should we understand using an MLP to get the attention weights? @wushilian @DeepRNN
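A minimal sketch of how an MLP can produce soft-attention weights, assuming TensorFlow 2 / Keras; shapes and names are illustrative, not the repo's actual code:

```python
import tensorflow as tf

def soft_attention(features, hidden, attn_units=512):
    """features: [batch, num_regions, feature_dim]; hidden: [batch, lstm_units]."""
    # Project image features and the previous hidden state into a common space.
    feat_proj = tf.keras.layers.Dense(attn_units)(features)                  # [B, N, A]
    hid_proj = tf.keras.layers.Dense(attn_units)(hidden)[:, tf.newaxis, :]   # [B, 1, A]
    # One-hidden-layer MLP scoring each region: e_{t,i}.
    scores = tf.keras.layers.Dense(1)(tf.nn.tanh(feat_proj + hid_proj))      # [B, N, 1]
    # Softmax over regions gives the attention weights alpha_{t,i}.
    alpha = tf.nn.softmax(scores, axis=1)                                    # [B, N, 1]
    # Context vector: expectation of the features under alpha.
    context = tf.reduce_sum(alpha * features, axis=1)                        # [B, feature_dim]
    return context, tf.squeeze(alpha, axis=-1)
```

The MLP just scores how relevant each spatial location is given the current decoder state; the softmax turns those scores into a distribution over locations.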

fanw52 commented 6 years ago

How about using the mean of the conv features directly? @DeepRNN @wushilian

weixijia commented 6 years ago

Which attention did the author use in his code: stochastic "hard" attention or deterministic "soft" attention? @wushilian @kstys @DeepRNN @whguan

fanw52 commented 6 years ago

I think it's the soft attention. @jsd1994wxj
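For what it's worth, the difference in the paper: soft attention takes the deterministic expectation over regions (fully differentiable, trained with plain backprop), while hard attention samples a single region per step and is trained with a REINFORCE-style estimator. A rough sketch of the hard-attention sampling step, assuming TensorFlow 2; this is not the repo's code:

```python
import tensorflow as tf

def hard_attention_sample(features, scores):
    """features: [batch, num_regions, dim]; scores: [batch, num_regions] (unnormalized logits)."""
    # Sample one region index per example from the attention distribution.
    idx = tf.random.categorical(scores, num_samples=1)        # [batch, 1]
    context = tf.gather(features, idx, batch_dims=1)          # [batch, 1, dim]
    return tf.squeeze(context, axis=1)                        # [batch, dim]
```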