-
@BriansIDP Hi! First of all, thanks for kindly sharing this implementation. I have a few questions about it and would appreciate your replies.
1. I notice that the default "layerlist" is "0", and I wonder if …
-
I used attention here.
```
def decoding_layer(dec_input, encoder_outputs, encoder_state, source_sequence_length,
                   target_sequence_length, max_target_sequence_length,
                   …
```
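The truncated `decoding_layer` above presumably scores the decoder state against `encoder_outputs` at each step. A minimal numpy sketch of one additive (Bahdanau-style) attention step, with all weight names hypothetical rather than taken from this repo:

```python
import numpy as np

def additive_attention(dec_state, enc_outputs, W_q, W_k, v):
    """One Bahdanau-style attention step (illustrative shapes).

    dec_state:   (H,)    current decoder hidden state
    enc_outputs: (T, H)  encoder output at each source position
    Returns the context vector (H,) and attention weights (T,).
    """
    # score_t = v . tanh(W_q @ s + W_k @ h_t)
    scores = np.tanh(enc_outputs @ W_k.T + dec_state @ W_q.T) @ v  # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()            # softmax over source positions
    context = weights @ enc_outputs     # (H,) weighted sum of encoder states
    return context, weights

rng = np.random.default_rng(0)
H, T = 8, 5
ctx, w = additive_attention(rng.normal(size=H), rng.normal(size=(T, H)),
                            rng.normal(size=(H, H)), rng.normal(size=(H, H)),
                            rng.normal(size=H))
```

The context vector is then concatenated with the decoder input (or state) before the output projection; the exact wiring depends on the seq2seq framework used.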
-
Hi, thanks for sharing the code. I expected alpha_visualization.ipynb to visualize hard attention the way the paper presents it. But when I trained a stochastic model and visualized it, I got v…
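For reference, the visual difference between the two regimes: soft attention spreads weight across the feature-map grid, while hard attention samples a single location, so its overlay should be (near) one-hot. A small sketch of upsampling an alpha grid to image resolution, not taken from this repo, just to illustrate what each overlay looks like:

```python
import numpy as np

def alpha_to_overlay(alpha, image_hw):
    """Nearest-neighbour upsample of a (g, g) attention grid to image size.

    alpha:    (g, g) attention weights over feature-map locations (sums to 1)
    image_hw: (H, W) target size; assumed divisible by g for simplicity
    """
    g = alpha.shape[0]
    fy, fx = image_hw[0] // g, image_hw[1] // g
    return np.kron(alpha, np.ones((fy, fx)))  # each cell becomes an fy x fx block

# Soft attention: diffuse weights. Hard attention: a one-hot sampled location.
soft = np.full((4, 4), 1 / 16.0)
hard = np.zeros((4, 4)); hard[2, 1] = 1.0
soft_overlay = alpha_to_overlay(soft, (8, 8))
hard_overlay = alpha_to_overlay(hard, (8, 8))
```

If a stochastically trained model still produces diffuse overlays, a common cause is visualizing the expected alphas rather than the sampled one-hot locations.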
-
## In one sentence
Sentiment analysis in NLP is usually framed as a binary negative/positive task, but that cannot capture the nuances of emotion, and labeling many emotion classes is expensive. This work instead trains by predicting the emoji attached to tweets: a bi-LSTM + Attention model trained on 1.2 billion tweets (!!).
### Paper link
https://arxiv.org/abs/1708.0052…
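The distinctive piece of such a model is the attention pooling applied to the bi-LSTM outputs before the emoji softmax. A numpy sketch of that pooling step, with the weight vector name hypothetical:

```python
import numpy as np

def attention_pool(h, w):
    """Attention pooling over a sequence of hidden states.

    h: (T, 2H) concatenated forward/backward bi-LSTM outputs per token
    w: (2H,)   learned attention vector
    Returns a single (2H,) sentence representation.
    """
    scores = h @ w                 # (T,) one relevance score per token
    a = np.exp(scores - scores.max())
    a /= a.sum()                   # softmax over time steps
    return a @ h                   # attention-weighted average of states

rng = np.random.default_rng(1)
rep = attention_pool(rng.normal(size=(7, 16)), rng.normal(size=16))
```

The pooled representation then feeds a linear layer with a softmax over the emoji classes; for downstream sentiment tasks that head is replaced and the network fine-tuned.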
-
Hi, I'm trying to implement the Deep Recurrent Attention Model described in the paper http://arxiv.org/pdf/1412.7755v2.pdf, applying it to image caption generation instead of image classification. I will …
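The core operation in that model is the glimpse: cropping a small window around an attended location at each step. A deliberately simplified single-scale sketch (the paper uses multi-resolution glimpses; this is just the cropping step):

```python
import numpy as np

def glimpse(image, center, size):
    """Extract a square patch around `center` (single-scale simplification).

    image:  (H, W) array; center: (y, x) in pixels; size: patch side length.
    The window is clamped to the image bounds, so border glimpses come out smaller.
    """
    y, x = center
    half = size // 2
    return image[max(0, y - half):y + half, max(0, x - half):x + half]

img = np.arange(100).reshape(10, 10)
patch = glimpse(img, (5, 5), 4)
```

For captioning, a recurrent core would emit a new `(y, x)` location per word instead of per classification step, with the location network trained via REINFORCE as in the paper.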
-
When I use LSTMModule as the input_modules, I get an error: **UnboundLocalError: local variable 'output_dim' referenced before assignment**
![image](https://user-images.githubusercontent.com/2…
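That error usually means `output_dim` is only assigned inside a branch that never runs for the LSTM module type. A minimal reproduction of the pattern and the usual fix (names are illustrative, not from this repo):

```python
def broken(module_type):
    if module_type == "linear":
        output_dim = 10
    # When module_type == "lstm", output_dim was never assigned:
    return output_dim  # raises UnboundLocalError

def fixed(module_type):
    if module_type == "linear":
        output_dim = 10
    elif module_type == "lstm":
        output_dim = 20          # handle the missing case explicitly
    else:
        raise ValueError(f"unknown module type: {module_type}")
    return output_dim
```

Checking which branch of the module-dispatch code handles (or fails to handle) LSTMModule should point at the exact line to fix.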
-
When trying to run aligner.py (after running the prepro code to get the SNLI data, vocab, and word embeddings), I get the following error. Could you please provide the precomputed word embeddings with the correct numbe…
-
[This paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4189457) is connected to the new minister of communications ([Sattar Hashemi](https://x.com/HashemiSattar)) in Iran: https://x.com/ircf…
-
## Tokenizers
| name | avg_sentence_len | max_sentence_len | min_sentence_len | used_tokens | vocab_size |
|--------------------|------------------|------------------|---------------…
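The columns in the table above can be reproduced with a short script; a sketch of how such per-tokenizer statistics might be computed, where `tokenize` stands in for whichever tokenizer is being benchmarked:

```python
def sentence_stats(sentences, tokenize):
    """Compute avg/max/min token counts, total token usage, and vocab size."""
    lengths, vocab = [], set()
    for s in sentences:
        toks = tokenize(s)
        lengths.append(len(toks))
        vocab.update(toks)          # distinct token types seen in the corpus
    return {
        "avg_sentence_len": sum(lengths) / len(lengths),
        "max_sentence_len": max(lengths),
        "min_sentence_len": min(lengths),
        "used_tokens": sum(lengths),
        "vocab_size": len(vocab),
    }

# Whitespace splitting as a trivial stand-in tokenizer:
stats = sentence_stats(["a b c", "a b"], str.split)
```

Running the same corpus through each tokenizer and formatting the dicts as rows yields a table like the one above.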
-
I have my dataset of 100 features and 1 target variable in a numpy array. How do I give it as an input to this project? Thanks
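If the array is shaped `(n_samples, 101)` with the target in the last column, the usual split into features and target looks like this; how the resulting `X` and `y` are then passed in depends on this project's training entry point:

```python
import numpy as np

# Stand-in for the user's array: 200 samples, 100 features + 1 target column.
data = np.random.default_rng(0).normal(size=(200, 101))

X = data[:, :-1]   # (200, 100) all columns except the last are features
y = data[:, -1]    # (200,)     the last column is the target variable
```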