jayparks / transformer
A PyTorch implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation"
543 stars · 122 forks
Issues
#18 "This is a piece of junk code don't look at it anymore!" · Ant0082 · opened 1 year ago · 0 comments
#17 "Can't run the whole project" · MsTao-68 · opened 2 years ago · 0 comments
#16 "**difference** between paper and your code" · yuanyihan · opened 3 years ago · 0 comments
#15 "I don`t want to debug..." · laichengen · opened 3 years ago · 0 comments
#14 "how to generate train, src, tgt,or how to run the code" · wangyy161 · opened 3 years ago · 2 comments
#13 "Some error in position encoding" · HN123-123 · opened 3 years ago · 0 comments
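For checking reports like #13, this is the sinusoidal positional encoding as specified in "Attention is All You Need" (a NumPy sketch for reference, not the repository's code; `max_len` and `d_model` values are illustrative):

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """PE(pos, 2i) = sin(pos / 10000^(2i/d_model)),
       PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))."""
    pos = np.arange(max_len)[:, None]            # (max_len, 1)
    i = np.arange(d_model)[None, :]              # (1, d_model)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angle[:, 0::2])         # even dimensions: sine
    pe[:, 1::2] = np.cos(angle[:, 1::2])         # odd dimensions: cosine
    return pe

pe = sinusoidal_positional_encoding(50, 16)
```

A quick sanity check against the formula: at position 0 every sine channel is 0 and every cosine channel is 1.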
#12 "why dose this repo use the earlier labels as the input of Decoder?" · qq563902455 · opened 3 years ago · 0 comments
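The behavior asked about in #12 is standard teacher forcing: during training the Transformer decoder is fed the gold target sequence shifted right by one position and is trained to predict the next token. A minimal sketch of that convention (the `<s>`/`</s>` markers and tokens are illustrative, not this repository's vocabulary):

```python
BOS, EOS = "<s>", "</s>"

def make_decoder_io(target_tokens):
    # Teacher forcing: the decoder input is the target shifted right,
    # and the training objective is to predict the following token.
    decoder_input = [BOS] + target_tokens
    decoder_target = target_tokens + [EOS]
    return decoder_input, decoder_target

inp, tgt = make_decoder_io(["ich", "bin", "ok"])
# inp: ['<s>', 'ich', 'bin', 'ok']
# tgt: ['ich', 'bin', 'ok', '</s>']
```

At inference time there are no gold labels, so the decoder instead consumes its own previous predictions step by step.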
#11 "improved transformer modules" · Sa-asus · opened 3 years ago · 0 comments
#10 "error in Linear" · baoxin1100 · opened 4 years ago · 3 comments
#9 "Thanks for your sharing, but I have a question that how to use it ?" · zhhhzhang · opened 4 years ago · 2 comments
#8 "Thanks for your sharing, but I have a question that how to use it ?" · hellokevin96 · opened 5 years ago · 0 comments
#7 "How to keep constrains of sum(k)=1 and sum(α)=1?" · sunzewei2715 · opened 5 years ago · 0 comments
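On #7: the Weighted Transformer paper constrains its branch weights κ and α to each sum to 1. A common way to keep such simplex constraints during gradient training is to store unconstrained logits and pass them through a softmax, so the constraint holds by construction (a sketch of that general technique, not necessarily how this repository implements it):

```python
import numpy as np

def simplex_weights(logits):
    # Softmax maps arbitrary real-valued parameters onto the
    # probability simplex: outputs are positive and sum to 1,
    # so no explicit constraint enforcement is needed.
    e = np.exp(logits - logits.max())   # subtract max for stability
    return e / e.sum()

kappa = simplex_weights(np.array([0.3, -1.2, 2.0, 0.5]))  # illustrative logits
```

Gradients then flow through the softmax into the logits, and the learned weights stay on the simplex at every step.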
#6 "How to add some functionalities to this code?" · liperrino · opened 5 years ago · 1 comment
#5 "TypeError: on __init()__ missing require positional argument: out_features" · liperrino · opened 5 years ago · 3 comments
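On #5: `torch.nn.Linear` takes two required positional arguments, `in_features` and `out_features`; constructing it with only one produces exactly this `TypeError`. A minimal sketch with illustrative sizes (not the repository's configuration):

```python
import torch
import torch.nn as nn

# nn.Linear(in_features, out_features) -- both arguments are required.
# nn.Linear(512) alone would raise:
#   TypeError: __init__() missing 1 required positional argument: 'out_features'
proj = nn.Linear(512, 512)        # e.g. a d_model -> d_model projection
x = torch.randn(2, 10, 512)       # (batch, seq_len, d_model)
y = proj(x)                       # shape preserved: (2, 10, 512)
```

The linear layer is applied to the last dimension, so batch and sequence dimensions pass through unchanged.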
#4 "only integer tensors of a single element can be converted to an index" · ShangYuming · opened 5 years ago · 0 comments
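The message in #4 is what PyTorch raises when a tensor is used where Python expects a plain integer index: only a single-element integer tensor can be converted, and `.item()` makes the conversion explicit. A small sketch of the fix (values are illustrative):

```python
import torch

lengths = torch.tensor([7])       # e.g. a batch of sequence lengths
seq = list(range(10))

# Slicing a Python list needs a plain int; .item() extracts it from
# a single-element tensor. A multi-element tensor cannot be converted
# and triggers the error quoted in the issue title.
n = lengths[0].item()
prefix = seq[:n]
```

When the tensor holds several elements, the fix is usually to index out the one scalar you meant before converting.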
#3 "Usage of this repository" · liperrino · opened 5 years ago · 0 comments
#2 "RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.LongTensor for argument #3 'index'" · GITJolly · opened 5 years ago · 2 comments
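The error in #2 usually means an index tensor (such as token ids fed to an embedding) stayed on the CPU while the model's parameters were moved to the GPU; moving the inputs to the model's device resolves it. A sketch that also runs on CPU-only machines (sizes are illustrative):

```python
import torch

# Put the model on the GPU when one is available, otherwise the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
embedding = torch.nn.Embedding(100, 8).to(device)

tokens = torch.tensor([[1, 5, 9]])   # created on the CPU by default
out = embedding(tokens.to(device))   # move indices to the model's device
```

Forgetting the `.to(device)` on `tokens` while the embedding sits on the GPU reproduces the CUDA/CPU LongTensor mismatch from the issue title.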
#1 "ModuleNotFoundError: No module named 'torchtext'" · pemywei · opened 5 years ago · 1 comment