-
https://www.bilibili.com/video/av95315327/ Hung-yi Lee NLP course
https://www.bilibili.com/video/BV1gb411j7Bs?p=149 Andrew Ng course
https://zhuanlan.zhihu.com/p/47108882 notes
https://blog.csdn.net/u013733326/article/deta…
-
Hi! I was using your project's dEFEND code (https://www.dropbox.com/sh/rzczwopo618jyv2/AAA1mI2yvbt6TAfqpcxfjL8va?dl=0) as a reference for a project we are working on. But, unfortunately, running the go_d…
-
document_bert_architectures.py
class DocumentBertSentenceChunkAttentionLSTM
def forward(...)
bert_output has size (batch_size, max_seq_len, num_hiddens); some sequences do not reach max_seq_len, so the remaining positions are 0,
[
[[x,x,...x]…
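A hypothetical sketch (not the repository's actual forward pass) of one way to ignore those zero-padded positions when pooling over bert_output; treating all-zero rows as padding is an assumption of this sketch:
```python
import torch

# Assumption: bert_output is (batch_size, max_seq_len, num_hiddens) and rows
# that are entirely zero correspond to padding positions.
def masked_mean_pool(bert_output: torch.Tensor) -> torch.Tensor:
    # mask: (batch_size, max_seq_len), True where the position holds real data
    mask = bert_output.abs().sum(dim=-1) > 0
    lengths = mask.sum(dim=1, keepdim=True).clamp(min=1)     # (batch_size, 1)
    summed = (bert_output * mask.unsqueeze(-1)).sum(dim=1)   # (batch_size, num_hiddens)
    return summed / lengths                                  # mean over non-padded positions
```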
-
https://github.com/abisee/pointer-generator/blob/a7317f573d01b944c31a76bde7218bcfc890ef6a/attention_decoder.py#L173
Hi, abisee! I am confused about the code on this line.
In your paper, the funct…
-
File "......./attention-OCR-master/src/model/seq2seq.py", line 75, in
linear = rnn_cell._linear # pylint: disable=protected-access
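This failure is typically because the private helper `_linear` was moved out of `rnn_cell` between TensorFlow 1.x releases, so `rnn_cell._linear` no longer resolves. A hedged workaround sketch (the module paths below are assumptions that vary by TF 1.x version, and the rest of the issue text is not shown):
```python
# Hypothetical version-dependent fallback, not code from attention-OCR itself.
try:
    # Some TF 1.x releases keep _linear here
    from tensorflow.python.ops.rnn_cell_impl import _linear as linear
except ImportError:
    # Older TF 1.x releases kept it in contrib
    from tensorflow.contrib.rnn.python.ops.core_rnn_cell import _linear as linear
```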
-
Thanks for this amazing work! Can you please add a script for testing the saved LSTM model with the BERT featurizer? @Gaurav-Pande
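Since the issue gives no repository details, here is a hypothetical sketch of such a test script; the checkpoint path, the LSTM head's input interface, and the use of Hugging Face `transformers` for BERT features are all assumptions, not the repository's actual setup:
```python
import torch
from transformers import BertModel, BertTokenizer

# Assumed names/paths for illustration only.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()
lstm_model = torch.load("saved_lstm_model.pt")  # assumed checkpoint of the trained LSTM head
lstm_model.eval()

with torch.no_grad():
    enc = tokenizer("text to classify", return_tensors="pt")
    feats = bert(**enc).last_hidden_state   # (1, seq_len, 768) BERT features
    preds = lstm_model(feats)                # assumes the head takes (batch, seq_len, 768)
print(preds)
```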
-
Hello, I am using the COCO dataset
with a two-layer LSTM model: one layer for top-down attention and one layer for the language model.
Words are segmented with jieba.
I used all the words in the picture de…
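A minimal, hypothetical sketch of such a two-layer decoder (dimensions and wiring are assumptions in the spirit of the Bottom-Up and Top-Down captioning architecture, not the poster's actual code):
```python
import torch
import torch.nn as nn

class TwoLayerTopDownDecoder(nn.Module):
    """One LSTMCell performs top-down attention over image region features,
    a second LSTMCell acts as the language model. Sizes are illustrative."""
    def __init__(self, vocab_size, embed_dim=512, feat_dim=2048, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.att_lstm = nn.LSTMCell(hidden_dim + feat_dim + embed_dim, hidden_dim)
        self.lang_lstm = nn.LSTMCell(feat_dim + hidden_dim, hidden_dim)
        self.att_v = nn.Linear(feat_dim, hidden_dim)
        self.att_h = nn.Linear(hidden_dim, hidden_dim)
        self.att_score = nn.Linear(hidden_dim, 1)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def step(self, word_ids, feats, state):
        # feats: (batch, num_regions, feat_dim) pre-extracted image features
        (h1, c1), (h2, c2) = state
        x1 = torch.cat([h2, feats.mean(dim=1), self.embed(word_ids)], dim=1)
        h1, c1 = self.att_lstm(x1, (h1, c1))              # top-down attention LSTM
        scores = self.att_score(torch.tanh(self.att_v(feats) + self.att_h(h1).unsqueeze(1)))
        alpha = torch.softmax(scores, dim=1)              # attention weights over regions
        attended = (alpha * feats).sum(dim=1)             # attended image feature
        h2, c2 = self.lang_lstm(torch.cat([attended, h1], dim=1), (h2, c2))  # language LSTM
        return self.out(h2), ((h1, c1), (h2, c2))
```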
-
I'm attempting to train a custom `aocr` model on an internal dataset. I've labeled the data using a directory of images and an annotation file as described in the README. This was converted to a data…
-
## 🐛 Bug
Batch size is hardcoded when tracing a model that uses a custom for loop with `nn.LSTMCell`. This makes it impossible to run model inference with different batch sizes.
## To Reproduce
…
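The reproduction code above is truncated; the following is a minimal hypothetical sketch (not the original report's code) of how `torch.jit.trace` bakes the tracing-time batch size into a model that loops over `nn.LSTMCell`:
```python
import torch
import torch.nn as nn

class LoopDecoder(nn.Module):
    """Illustrative model: a hand-written time loop over nn.LSTMCell."""
    def __init__(self, input_size=8, hidden_size=16):
        super().__init__()
        self.hidden_size = hidden_size
        self.cell = nn.LSTMCell(input_size, hidden_size)

    def forward(self, x):
        # x: (seq_len, batch, input_size); the initial state is sized from x
        h = x.new_zeros(x.size(1), self.hidden_size)
        c = x.new_zeros(x.size(1), self.hidden_size)
        for t in range(x.size(0)):
            h, c = self.cell(x[t], (h, c))
        return h

model = LoopDecoder()
# Tracing records concrete values: batch size 4 and the loop length become constants.
traced = torch.jit.trace(model, torch.randn(5, 4, 8))
# Running the traced module with a different batch size then fails or misbehaves.
out = traced(torch.randn(5, 2, 8))
```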