-
Hi there.
I read your paper in 2020 and am running a replication experiment. Training and testing with the data you distributed completed without errors. However, I cannot reproduce the figures present…
kyo44 updated 1 month ago
-
```
Traceback (most recent call last):
  File "/usr/local/bin/paddle2onnx", line 10, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/dist-packages/paddle2onnx/command.py", line 142, in m…
```
-
What should I do if I add an attention layer after the CRF layer?
qinya updated 6 years ago
-
When training the LSTM with `python main.py --validate -c applications/FootballAction/train_proposal/configs/lstm_football.yaml -o output_dir=applications/FootballAction/checkpoints/LSTM`, could anyone advise how to resolve the following? I've been struggling with it for a long time without results.
W0…
-
After the most recent update I'm getting an error while using an LSTM with BERT. Can someone please help me resolve it?
![image](https://user-images.githubusercontent.com/48489592/80276900-285f2180-8709-…
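Since the error itself is truncated, here is only a minimal sketch of the general wiring the post describes: running an LSTM over contextual embeddings such as BERT's final hidden states. The embeddings below are random stand-ins for real BERT output, and all sizes (`seq_len=5`, `bert_dim=8`, `lstm_dim=4`) are illustrative assumptions, not values from the post.

```python
import numpy as np

# Random stand-in for BERT's per-token hidden states; in the real
# pipeline these would come from a BERT encoder.
rng = np.random.default_rng(0)
seq_len, bert_dim, lstm_dim = 5, 8, 4
bert_states = rng.normal(size=(seq_len, bert_dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One weight matrix per gate (input, forget, candidate, output),
# each acting on the concatenation [x_t ; h_{t-1}].
W = {g: rng.normal(scale=0.1, size=(lstm_dim, bert_dim + lstm_dim)) for g in "ifgo"}
b = {g: np.zeros(lstm_dim) for g in "ifgo"}

h = np.zeros(lstm_dim)
c = np.zeros(lstm_dim)
outputs = []
for x in bert_states:
    z = np.concatenate([x, h])
    i = sigmoid(W["i"] @ z + b["i"])   # input gate
    f = sigmoid(W["f"] @ z + b["f"])   # forget gate
    g = np.tanh(W["g"] @ z + b["g"])   # candidate cell state
    o = sigmoid(W["o"] @ z + b["o"])   # output gate
    c = f * c + i * g
    h = o * np.tanh(c)
    outputs.append(h)

outputs = np.stack(outputs)  # shape (seq_len, lstm_dim)
```

A framework implementation (e.g. `torch.nn.LSTM` over BERT outputs) would replace the explicit gate arithmetic, but the data flow is the same.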
-
Hello, I was intrigued by this paper. I'm trying to build an object tracker, and I would like to use your work along with [detection using transformers](https://github.com/facebookresearch/detr) …
-
## Paper link
https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf
## Publication date (yyyy/mm/dd)
2018/06/11
## Summary
The so-called GPT-1 paper. Unsupervised pre-training of a language model, followed by supervised fine-tuning…
-
BERT-type: uncased_L-12_H-768_A-12
Batch_size = 8
BERT parameters:
learning rate: 1e-05
Fine-tune BERT: True
vocab size: 30522
hidden_size: 768
num_hidden_layer: 12
num_attention_heads: 12
hi…
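The hyperparameters above pin down the model size. As a sanity check, here is a back-of-the-envelope parameter count for `uncased_L-12_H-768_A-12`; the FFN intermediate size (3072 = 4×768), maximum position embeddings (512), and token-type vocabulary (2) are standard BERT-base defaults assumed here rather than listed in the post.

```python
# Approximate parameter count of BERT-base from its hyperparameters.
vocab, hidden, layers = 30522, 768, 12
max_pos, type_vocab, ffn = 512, 2, 4 * 768  # assumed BERT-base defaults

# Token + position + segment embeddings, plus the embedding LayerNorm.
embeddings = (vocab + max_pos + type_vocab) * hidden + 2 * hidden

per_layer = (
    4 * (hidden * hidden + hidden)   # Q, K, V and attention output projections
    + 2 * hidden                     # attention LayerNorm
    + hidden * ffn + ffn             # FFN up-projection
    + ffn * hidden + hidden          # FFN down-projection
    + 2 * hidden                     # output LayerNorm
)

pooler = hidden * hidden + hidden
total = embeddings + layers * per_layer + pooler
print(f"{total:,}")  # roughly 109.5M, matching the usual "110M" figure for BERT-base
```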
-
I read the paper "Leveraging Context Information for Natural Question Generation".
Section 2.2 says:
> Each encoder state hj is the concatenation of two bi-directional LSTM states
> (Section 3.2 …
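The quoted construction can be sketched in a few lines: each encoder state h_j is the forward LSTM state at position j concatenated with the backward state at the same position. The states below are random stand-ins, and the per-direction hidden size of 3 is an assumption for illustration.

```python
import numpy as np

# Stand-ins for the outputs of the forward and backward LSTM passes.
rng = np.random.default_rng(0)
seq_len, hidden = 4, 3
h_forward = rng.normal(size=(seq_len, hidden))   # forward LSTM states
h_backward = rng.normal(size=(seq_len, hidden))  # backward LSTM states

# h_j = [h_fwd_j ; h_bwd_j]: a 2*hidden-dimensional encoder state per token.
encoder_states = np.concatenate([h_forward, h_backward], axis=-1)
print(encoder_states.shape)  # (4, 6)
```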
-
**One-line summary (optional)**
- Learn which architectures (layers, dropout rates, etc.) are used by Melody-Rnn and FolkRNN.
**Reason**
- We would like to reproduce at least one of the RNN-based metho…