graykode / nlp-tutorial
Natural Language Processing Tutorial for Deep Learning Researchers
https://www.reddit.com/r/MachineLearning/comments/amfinl/project_nlptutoral_repository_who_is_studying/
MIT License · 14.31k stars · 3.95k forks
Issues (sorted by newest first):
#84  The comment in the Bi-LSTM (Attention) model has an issue. — opened by tmracy, 2 months ago (0 comments)
#83  Update Transformer.py — opened by Yujia2415, 9 months ago (1 comment)
#82  tensorflow 2.x.x version from TexCNN — opened by seunggihong, 1 year ago (0 comments)
#81  Seq2Seq(Attention) may have a mistake — closed by mianman1017, 1 year ago (0 comments)
#80  Update TextCNN.py — opened by ZhiweiCheng2020, 1 year ago (0 comments)
#79  add text — opened by xxz-wow, 1 year ago (0 comments)
#78  New comment — closed by lilydav, 1 year ago (0 comments)
#77  The Learning Rate in 5-2.BERT must be reduced. — opened by Cheng0829, 2 years ago (0 comments)
#76  The Adam in 5-1.Transformer should be replaced by SGD — opened by Cheng0829, 2 years ago (0 comments)
#75  Faster attention calculation in 4-2.Seq2Seq? — opened by shouldsee, 2 years ago (1 comment)
#74  BiLstm(tf) maybe have mistake — opened by cui-z, 2 years ago (0 comments)
#73  5.1 Transformer may have wrong position embed — opened by JiangHan97, 2 years ago (0 comments)
#72  3-3.Bi-LSTM may have wrong padding — opened by ETWBC, 3 years ago (0 comments)
#71  LongTensor error dim in BiLSTM Attention with new data — opened by koaaihub, 3 years ago (0 comments)
#70  Modifying Seq2Seq.py to test prediction — closed by karim-moon, 3 years ago (0 comments)
#69  In code 4-1.Seq2Seq might have wrong section — closed by karim-moon, 3 years ago (0 comments)
#68  Bi-LSTM attention calc may be wrong — opened by liuxiaoqun, 3 years ago (2 comments)
#67  About make_batch of NNLM — opened by KODGV, 3 years ago (0 comments)
#66  Why is src_len+1 in Transformer demo? — opened by Yuanbo2021, 3 years ago (1 comment)
#65  Question? — opened by RaySunWHUT, 3 years ago (0 comments)
#64  CODE — opened by Developer-Prince, 3 years ago (0 comments)
#63  Fix comment errors in NNLM — closed by secsilm, 3 years ago (1 comment)
#62  Add "Launch in Deepnote" button — closed by danzvara, 3 years ago (1 comment)
#61  Update BERT.py — opened by colinpeth, 4 years ago (0 comments)
#60  Update README.md — opened by charchitd, 4 years ago (0 comments)
#59  link of NNLM and word2vec is disabled — opened by workerundervirus, 4 years ago (0 comments)
#58  2.0 — closed by graykode, 4 years ago (0 comments)
#57  Version 2.0 will be updated — closed by graykode, 4 years ago (0 comments)
#56  BERT-Torch.py may have a small mistake — closed by lucenzhong, 4 years ago (0 comments)
#55  a question about transformer — opened by luojq-sysysdcs, 4 years ago (1 comment)
#54  about seq2seq(attention)-Torch multiple sample training question — opened by wmathor, 4 years ago (0 comments)
#53  seq2seq_torch maybe have a small mistake — closed by wmathor, 1 year ago (2 comments)
#52  Attention BiLSTM — closed by elsheikh21, 4 years ago (0 comments)
#51  Update NNLM-Torch.py — closed by wmathor, 4 years ago (1 comment)
#50  fix bi-LSTM hidden and cell state shape comments — closed by Yuhuishishishi, 4 years ago (1 comment)
#49  Fix textCNN shaope comments typo — closed by Yuhuishishishi, 4 years ago (1 comment)
#48  May be a small mistake — opened by zhangyikaii, 4 years ago (1 comment)
#47  3-3-bilstm-torch comment error — opened by Tonybb9089, 4 years ago (1 comment)
#46  how to use seq2seq(attention) for multiple batch — opened by jbjeong91, 4 years ago (1 comment)
#45  Problem with BERT batch generation — opened by aqibsaeed, 4 years ago (1 comment)
#44  TextCNN_Torch have wrong comment — opened by jnakor, 4 years ago (3 comments)
#43  Which kind of model is better for keyword-set classification? — opened by guotong1988, 4 years ago (0 comments)
#42  seq2seq(attention) have wrong comment — opened by nomorecoke, 5 years ago (0 comments)
#41  refactor: First update — closed by thunderboom, 5 years ago (0 comments)
#40  Some problems about Bert — opened by tfighting, 5 years ago (2 comments)
#39  the writter's Transformer torch.py 's position embed has some mistake. — opened by zhangbo2008, 5 years ago (0 comments)
#38  Question about tensor.view operation in Bi-LSTM(Attention) — opened by iamxpy, 5 years ago (1 comment)
#37  FIX transformer init — closed by wking-tao, 4 years ago (1 comment)
#36  A questions about decoder in seq2seq-torch — opened by acm5656, 5 years ago (1 comment)
#35  Transformer/Transformer(Greedy_decoder)-Torch.py on gpu — opened by kangkang61, 5 years ago (2 comments)