-
Dear Jiasen Lu,
Thank you for your work on "Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning".
I am writing to ask about the "visual sentinel": what is th…
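To make my question concrete, here is my current reading of the sentinel computation, written as a rough PyTorch-style sketch of my own (with `mem_cell` standing for the LSTM memory cell m_t); please correct me if I have misread the paper:
```python
import torch

def visual_sentinel(x_t, h_prev, mem_cell, W_x, W_h):
    # Sentinel gate over the LSTM input and previous hidden state:
    #   g_t = sigmoid(W_x x_t + W_h h_{t-1})
    g_t = torch.sigmoid(x_t @ W_x + h_prev @ W_h)
    # Visual sentinel: s_t = g_t * tanh(m_t), with m_t the LSTM memory cell
    return g_t * torch.tanh(mem_cell)
```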
-
-
Hello @DeepRNN!
I took a look at the attention maps that the model generates in test mode.
I did the following: in `base_model.py:200` I changed the code as follows
```
memory, output, scor…
-
I am curious about the motivation for [this step](https://github.com/atulkum/pointer_summarizer/blob/5eb298a66b4d3b53adb854ba9b5c82580cf2fa1e/training_ptr_gen/model.py#L151), as I couldn't find anything ab…
-
my params are {
"cell_type": "lstm",
"depth": 2,
"attention_type": "Luong",
"bidirectional": true,
"use_residual": true,
"use_dropout": false,
"time_major": true,
…
-
Hi @spro, I've read your implementation of Luong attention in the PyTorch seq2seq translation tutorial, and in the context calculation step you're using rnn_output as input when calculating attn_weights …
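To make sure I'm pointing at the right step, here is a minimal sketch of how I understand the "dot" score variant; this is my own simplification, not the tutorial's exact code:
```python
import torch
import torch.nn.functional as F

def luong_dot_attention(rnn_output, encoder_outputs):
    # rnn_output:      (batch, 1, hidden)       -- current decoder hidden state
    # encoder_outputs: (batch, src_len, hidden)  -- all encoder states
    # Scores: dot product between the decoder state and every encoder state
    scores = torch.bmm(rnn_output, encoder_outputs.transpose(1, 2))  # (batch, 1, src_len)
    attn_weights = F.softmax(scores, dim=-1)
    # Context: attention-weighted sum of the encoder states
    context = torch.bmm(attn_weights, encoder_outputs)               # (batch, 1, hidden)
    return context, attn_weights
```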
-
## TL;DR
How to implement DeepMoji from scratch in PyTorch.
### Article Link
[Understanding emotions - from Keras to pyTorch](https://medium.com/huggingface/understanding-emotions-from-keras-…
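For a quick orientation, here is a rough, self-contained sketch of the architecture the article describes (embedding, two bi-LSTM layers, and attention over the concatenated states). Layer sizes below are illustrative placeholders, not the released torchMoji weights:
```python
import torch
import torch.nn as nn

class DeepMojiSketch(nn.Module):
    # Rough sketch: embedding -> two bi-LSTM layers -> simple attention -> classifier.
    def __init__(self, vocab_size, n_classes, embed_dim=256, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm1 = nn.LSTM(embed_dim, hidden, bidirectional=True, batch_first=True)
        self.lstm2 = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
        feat_dim = embed_dim + 4 * hidden  # embeddings + both bi-LSTM outputs, concatenated
        self.attn = nn.Linear(feat_dim, 1, bias=False)
        self.out = nn.Linear(feat_dim, n_classes)

    def forward(self, tokens):
        e = self.embed(tokens)                  # (batch, seq, embed_dim)
        h1, _ = self.lstm1(e)                   # (batch, seq, 2*hidden)
        h2, _ = self.lstm2(h1)                  # (batch, seq, 2*hidden)
        feats = torch.cat([e, h1, h2], dim=-1)  # skip-connections into the attention layer
        weights = torch.softmax(self.attn(feats).squeeze(-1), dim=1)  # (batch, seq)
        pooled = (feats * weights.unsqueeze(-1)).sum(dim=1)           # weighted sum over time
        return self.out(pooled)
```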
-
-
[This continues the discussion in #12.]
Both the transducer and the pointer-generator treat features in architecture-specific ways; issue #12 deals with their ideal treatment in the transducer, sin…
-
https://keras.io/layers/convolutional/
**to look into:**
Keras Flatten layer, Permute layer, Reshape layer, RepeatVector (shape sketch at the end of these notes)
[pure size and price, potentially position also]
take l2 states as g…
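A small shape-only sketch of the reshaping layers listed above, assuming the `tensorflow.keras` import path; the sizes are arbitrary:
```python
from tensorflow.keras import layers, models

# Toy model just to see what each reshaping layer does to the tensor shape.
m = models.Sequential([
    layers.Permute((2, 1), input_shape=(4, 6)),  # (batch, 4, 6) -> (batch, 6, 4)
    layers.Reshape((3, 8)),                      # (batch, 6, 4) -> (batch, 3, 8)
    layers.Flatten(),                            # (batch, 3, 8) -> (batch, 24)
    layers.RepeatVector(5),                      # (batch, 24)   -> (batch, 5, 24)
])
m.summary()
```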