lucidrains / memorizing-transformers-pytorch
Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate nearest neighbors, in Pytorch
MIT License · 620 stars · 46 forks
Issues
#16 stable_softmax · huu4ontocord · closed 11 months ago · 3 comments
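The issue title above refers to the standard numerical-stability trick for softmax: subtracting the row maximum before exponentiating so that `exp` never overflows. A minimal sketch of that trick (the function name `stable_softmax` here mirrors the issue title; the exact fix in the repo may differ):

```python
import math

def stable_softmax(logits):
    """Numerically stable softmax: subtracting the max before exponentiating
    keeps every exponent <= 0, so exp() cannot overflow for large logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# naive exp(1000.0) would overflow a float64; the shifted version is fine
probs = stable_softmax([1000.0, 1000.0])
```

The shift changes nothing mathematically, since softmax is invariant to adding a constant to all logits.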
#15 Is there any environment detail for using this package? · sertg5k4ll · opened 11 months ago · 0 comments
#14 Is it a T5 arch or a decoder-only GPT-style arch? · brando90 · opened 1 year ago · 1 comment
#13 Hugging Face training code with demo · brando90 · opened 1 year ago · 0 comments
#12 Official repo? · brando90 · opened 1 year ago · 1 comment
#11 Update KNNMemory.add to correctly index and update KNN list, after ad… · LakshyAAAgrawal · closed 1 year ago · 1 comment
#10 KNNMemory add() does not appear to update self.knns · vyaivo · closed 1 year ago · 0 comments
#9 FAISS hard reset · itsdaniele · opened 1 year ago · 0 comments
#8 Update train.py to correct implementation of val loss calculation · LakshyAAAgrawal · closed 1 year ago · 0 comments
#7 index out of · chxiag · opened 1 year ago · 0 comments
#6 Support for Multi-GPU training? · Victorwz · opened 1 year ago · 0 comments
#5 Dimensionality of key and values for Attention · manestay · opened 2 years ago · 8 comments
#4 Arguments to reproduce the models from the original paper? · manestay · closed 2 years ago · 1 comment
#3 Maybe scale is wrong · denadai2 · opened 2 years ago · 3 comments
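Issue #3's title likely concerns the attention scale factor: in scaled dot-product attention, scores are divided by the square root of the key dimension so their variance stays near 1 regardless of head size. A minimal sketch of the standard formulation, assuming that is the scale in question (this is an illustrative helper, not code from the repo):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Standard scaled dot-product attention.

    q: (n, d_k) queries, k: (m, d_k) keys, v: (m, d_v) values.
    Dividing scores by sqrt(d_k) keeps their variance ~1 as d_k grows,
    which keeps the softmax out of its saturated regime.
    """
    d_k = q.shape[-1]
    scores = (q @ k.T) / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # stable softmax shift
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# with all-zero queries the weights are uniform, so the output is the mean of v
q = np.zeros((2, 4))
k = np.random.randn(3, 4)
v = np.arange(6.0).reshape(3, 2)
out = scaled_dot_product_attention(q, k, v)
```

Omitting the 1/sqrt(d_k) factor makes score magnitudes grow with head dimension, pushing the softmax toward one-hot weights and shrinking gradients.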
#2 try using joblib to parallelize adding new memories to faiss indices · lucidrains · closed 2 years ago · 0 comments
#1 Any interesting results? · rom1504 · opened 2 years ago · 79 comments