-
Anything I type returns the answer `- [6.5]`. The model was trained only once (prepare_data.py and train.py) on a conversation of 1,200 lines. I'm using Python 3.7 and TensorFlow 1.14.0 on a CPU.
…
-
I saw that there is a Hierarchical Attention Network model included in the directory: reproduction/text_classification/model/HAN.py.
I realized that the input for HAN is different from that of the other models (…
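For context, HAN-style models typically take documents as a 3-D batch of word indices, (batch, sentences per document, words per sentence), rather than the flat 2-D (batch, words) input most text classifiers use, so word-level attention can run inside each sentence and sentence-level attention over the sentence vectors. A minimal sketch of that layout, with made-up shapes and a padding id that are not taken from HAN.py:

```python
import numpy as np

# Hypothetical shapes, not from the repo: 2 documents, padded to
# 3 sentences each, 5 word ids per sentence.
batch_size, max_sents, max_words = 2, 3, 5
pad_id = 0

# Flat input used by most text classifiers: (batch, words).
flat_input = np.random.randint(1, 100, size=(batch_size, max_words))

# Hierarchical input expected by a HAN: (batch, sentences, words).
han_input = np.random.randint(1, 100, size=(batch_size, max_sents, max_words))
han_input[0, 2, :] = pad_id  # document 0 has only 2 real sentences

print(flat_input.shape)  # (2, 5)
print(han_input.shape)   # (2, 3, 5)
```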
-
Thanks for your awesome contribution. I was wondering whether I can use this to achieve visual attention. I was thinking of using seq2seq with attention and feeding the convnet's flatten layer as …
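One common way to get visual attention out of a seq2seq decoder is to skip the flatten entirely and instead treat the conv feature map as a sequence of spatial vectors that the attention can weight. A minimal PyTorch sketch of that idea, with names and shapes that are illustrative rather than from this repo:

```python
import torch

# Illustrative shapes: a conv backbone's last feature map,
# (batch, channels, height, width).
feats = torch.randn(4, 512, 7, 7)

# Instead of flattening to one (batch, 512*7*7) vector, keep the
# 49 spatial positions as an "encoder sequence" of 512-d vectors.
enc_seq = feats.flatten(2).transpose(1, 2)         # (4, 49, 512)

# Toy decoder hidden state; dot-product attention over locations.
dec_h = torch.randn(4, 512)                        # (batch, 512)
scores = torch.bmm(enc_seq, dec_h.unsqueeze(2))    # (4, 49, 1)
alpha = torch.softmax(scores.squeeze(2), dim=1)    # weights over 49 cells
context = torch.bmm(alpha.unsqueeze(1), enc_seq)   # (4, 1, 512)
print(context.squeeze(1).shape)                    # torch.Size([4, 512])
```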
-
Hi,
Since I don't have access to a GPU, I can't run your code, but there is another implementation on GitHub that builds your model with the Keras library. Can you confirm the following code and c…
-
## Detailed Description
Given the absence of specific data for the model, I proceeded to predict photovoltaic power generation by leveraging preprocessing and feature engineering techniques usi…
-
Hi, I have a question in self-attention.ipynb about the code below:
out = out[:, :, :self.hidden_dim] + out[:, :, self.hidden_dim:]
Why do you add the two hidden states of Bidirectional LST…
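For context (assuming the notebook uses a PyTorch `nn.LSTM` with `bidirectional=True`, which is what the slicing suggests): the output's last dimension stacks the forward pass in the first `hidden_dim` channels and the backward pass in the last `hidden_dim`, so summing the halves merges the two directions while keeping the per-step feature size at `hidden_dim` instead of `2*hidden_dim`. A minimal sketch:

```python
import torch
import torch.nn as nn

hidden_dim = 8
lstm = nn.LSTM(input_size=4, hidden_size=hidden_dim,
               bidirectional=True, batch_first=True)

x = torch.randn(2, 5, 4)               # (batch, seq_len, features)
out, _ = lstm(x)                       # (2, 5, 2 * hidden_dim)

# Forward direction fills the first hidden_dim channels, backward
# the last hidden_dim; summing fuses them into one hidden_dim vector
# per step, rather than concatenating to 2*hidden_dim.
fused = out[:, :, :hidden_dim] + out[:, :, hidden_dim:]
print(out.shape, fused.shape)          # (2, 5, 16) (2, 5, 8)
```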
-
I am having a bit of trouble understanding how to incorporate the AttentionLSTM layer into my code. In your blog you said that *"The attentional component can be tacked onto the LSTM code that al…
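Without knowing the blog's exact AttentionLSTM API, the usual pattern the quote describes is: run an LSTM with `return_sequences=True`, score each timestep, and pool the sequence with the resulting softmax weights. A generic Keras sketch of that pattern, with all layer sizes illustrative:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Illustrative pattern only; the blog's AttentionLSTM layer may differ.
inputs = layers.Input(shape=(20, 32))                  # (timesteps, features)
seq = layers.LSTM(64, return_sequences=True)(inputs)   # (None, 20, 64)

# Attention "tacked on": one score per timestep, softmaxed, then a
# weighted sum of the LSTM outputs replaces the last-step state.
scores = layers.Dense(1)(seq)                          # (None, 20, 1)
weights = layers.Softmax(axis=1)(scores)               # weights over timesteps
context = layers.Dot(axes=1)([weights, seq])           # (None, 1, 64)
context = layers.Flatten()(context)                    # (None, 64)

outputs = layers.Dense(1, activation="sigmoid")(context)
model = Model(inputs, outputs)
model.summary()
```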
-
The output of DecoderRNN-T has 4 dimensions; how do I use it to recognize speech? Also, did the author build the model architecture with LAS? For example: Conformer-Encoder, LSTM-Decoder, A…
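For orientation: in a transducer (RNN-T) setup the joint network's training output is typically 4-D, (batch, encoder frames T, label positions U+1, vocab), which feeds the transducer loss; recognition does not consume that full tensor but walks the lattice step by step, calling the joint on one (t, u) pair at a time. A schematic greedy-decoding sketch where every function and shape is a hypothetical stand-in, not this repo's API:

```python
import torch

B, T, U, V = 2, 50, 10, 30              # batch, frames, label steps, vocab
blank = 0

# Training-time joint output: one vocab distribution per (t, u) pair.
logits = torch.randn(B, T, U + 1, V)    # the 4-D tensor fed to the RNN-T loss
print(logits.shape)

def joint(enc_t, pred_u):
    # Hypothetical joint network: combine one encoder frame with one
    # predictor state into vocab logits. Real models use learned
    # projections; this toy version just adds the two vectors.
    return enc_t + pred_u

# Simplified greedy pass: at most one emission per frame.
enc = torch.randn(T, V)                 # stand-in per-frame encoder outputs
pred = torch.zeros(V)                   # stand-in predictor state
hyp = []
for t in range(T):
    token = joint(enc[t], pred).argmax().item()
    if token != blank:                  # emit non-blank; a real decoder
        hyp.append(token)               # would also advance the predictor
print(hyp[:10])
```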
-
Hi, thanks for sharing the code!
It looks like the location softmax implemented in the conditional LSTM is not the one you describe in the paper 'Action Recognition with Visual Attention' (eq. 4), but…
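For reference, the location softmax in that paper (eq. 4) puts a separate weight vector on each of the K×K spatial cells of the CNN feature cube and normalizes their scores against the previous hidden state, $l_{t,i} = \exp(w_i^\top h_{t-1}) / \sum_j \exp(w_j^\top h_{t-1})$. A minimal PyTorch sketch of that formulation, with illustrative sizes:

```python
import torch
import torch.nn as nn

K, D = 7, 512                        # spatial grid and hidden size (illustrative)

# Eq. 4: one weight vector per location, scored against h_{t-1}.
location_scorer = nn.Linear(D, K * K, bias=False)

h_prev = torch.randn(1, D)           # previous LSTM hidden state
l_t = torch.softmax(location_scorer(h_prev), dim=1)     # (1, K*K), sums to 1

# Attention-weighted input: expectation over the K*K feature-cube slices.
feats = torch.randn(1, K * K, D)     # CNN feature cube, flattened spatially
x_t = torch.bmm(l_t.unsqueeze(1), feats).squeeze(1)     # (1, D)
print(l_t.sum().item(), x_t.shape)
```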
-
If I want to use an attention RNN with this lib, are there any examples or guides for building one?
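I don't know this library's API, but as a generic starting point, here is a minimal sketch of one attention-RNN decoder step in plain PyTorch; all names and sizes are illustrative and nothing here is library-specific:

```python
import torch
import torch.nn as nn

enc_dim, dec_dim, vocab = 64, 64, 100           # illustrative sizes

cell = nn.GRUCell(enc_dim + dec_dim, dec_dim)   # decoder RNN cell
out_proj = nn.Linear(dec_dim, vocab)

enc_states = torch.randn(1, 12, enc_dim)        # (batch, src_len, enc_dim)
h = torch.zeros(1, dec_dim)                     # decoder hidden state
y_emb = torch.randn(1, dec_dim)                 # embedding of previous token

# One decoder step: dot-product attention over encoder states,
# then feed [context; previous embedding] into the RNN cell.
scores = torch.bmm(enc_states, h.unsqueeze(2)).squeeze(2)        # (1, src_len)
alpha = torch.softmax(scores, dim=1)
context = torch.bmm(alpha.unsqueeze(1), enc_states).squeeze(1)   # (1, enc_dim)

h = cell(torch.cat([context, y_emb], dim=1), h)
logits = out_proj(h)                            # next-token scores
print(logits.shape)                             # torch.Size([1, 100])
```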