idiap / fast-transformers
PyTorch library for fast transformer implementations
1.65k stars · 179 forks
Issues (newest first)
#32 · Support for 16-bit Floats · anti4m · opened 4 years ago · 7 comments
#31 · What is the best way to perform recurrent sampling while training? · hadaev8 · closed 3 years ago · 27 comments
#30 · Relative Position Representations · burcehan · closed 4 years ago · 5 comments
#29 · Masking extension · angeloskath · opened 4 years ago · 1 comment
#28 · Feature map function φ(x) = elu(x) + 1 · burcehan · closed 4 years ago · 4 comments (see the sketch after this list)
#27 · Segment-Level Recurrence with State Reuse · burcehan · closed 4 years ago · 3 comments
#26 · n_heads param in TransformerEncoderLayer unused and inconsistent with TransformerDecoderLayer · hadaev8 · closed 4 years ago · 2 comments
#25 · Error with recurrent attention: ValueError: too many values to unpack (expected 2) · hadaev8 · closed 4 years ago · 11 comments
#24 · causal_product_cuda.cu · burcehan · closed 4 years ago · 0 comments
#23 · causal_product_cuda.cu: Error compiling objects for extension · burcehan · closed 4 years ago · 4 comments
#22 · Regarding arbitrary mask · netw0rkf10w · closed 4 years ago · 6 comments
#21 · It's not convenient (or possible) to use checkpointing in PyTorch because masks are not tensor-type objects · hadaev8 · opened 4 years ago · 4 comments
#20 · How to visualize attention weights? · hadaev8 · closed 4 years ago · 17 comments
#19 · RuntimeError: CUDA error: an illegal memory access was encountered · annahung31 · closed 4 years ago · 1 comment
#18 · Can't install on Colab · hadaev8 · closed 4 years ago · 3 comments
#17 · Positional embedding · horiacristescu · closed 4 years ago · 2 comments
#16 · Problems installing on Debian GNU/Linux 10 (buster) using Python 3.7.3 · maximilianreimer · closed 4 years ago · 5 comments
#15 · Please add PyTorch to be automatically installed · maximilianreimer · closed 4 years ago · 2 comments
#14 · Image generation/completion · lorinczszabolcs · closed 4 years ago · 2 comments
#13 · Ensure all forward() methods have a proper description of input tensors · angeloskath · opened 4 years ago · 0 comments
#12 · No module named 'fast_transformers.causal_product.causal_product_cpu' (solved: needed to add CUDA to the PATH) · ghost · closed 4 years ago · 8 comments
#11 · TODO in causal linear denominator · eelxpeng · closed 4 years ago · 1 comment
#10 · Replace inplace operations in recurrent linear attention · TariqAHassan · closed 4 years ago · 1 comment
#9 · Fix extremely minor typo · bionicles · closed 4 years ago · 0 comments
#8 · Please add comments with the meaning of einsum dimensions · bionicles · closed 4 years ago · 2 comments
#7 · n_heads parameter of RecurrentTransformerEncoderLayer is not currently used · TariqAHassan · closed 4 years ago · 5 comments
#6 · Unable to install extensions · TariqAHassan · closed 3 years ago · 3 comments
#5 · Experimental code: CTC loss issue · iiSeymour · closed 4 years ago · 4 comments
#4 · Training · Uglj · closed 4 years ago · 3 comments
#3 · Encoder-decoder setup? · ghost · closed 4 years ago · 17 comments
#2 · hash_cuda.cu: No such file or directory · pathway · closed 4 years ago · 5 comments
#1 · Why this implementation is fast and how fast is it? · jiqiujia · closed 4 years ago · 2 comments
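Two of the threads above touch the core of the library: #28 asks about the feature map φ(x) = elu(x) + 1, and #1 asks why the implementation is fast. In the linear-attention formulation this library implements, a positive feature map replaces the softmax kernel, and associativity lets φ(Q)(φ(K)ᵀV) be computed in time linear in sequence length instead of quadratic. Below is a minimal, non-causal sketch of that idea in plain PyTorch; the names elu_feature_map and linear_attention are illustrative, not the library's API, and the real package handles the causal case with optimized CUDA kernels such as causal_product_cuda.cu.

```python
import torch
import torch.nn.functional as F

def elu_feature_map(x):
    # phi(x) = elu(x) + 1: strictly positive, so it can stand in
    # for the softmax kernel (the subject of issue #28)
    return F.elu(x) + 1

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, seq_len, dim); single head, no causal mask
    q, k = elu_feature_map(q), elu_feature_map(k)
    # Regroup (phi(Q) phi(K)^T) V as phi(Q) (phi(K)^T V):
    # O(N * dim^2) instead of O(N^2 * dim) (the answer to issue #1)
    kv = torch.einsum("nsd,nsm->ndm", k, v)                        # sum_s phi(k_s) v_s^T
    z = 1.0 / (torch.einsum("nld,nd->nl", q, k.sum(dim=1)) + eps)  # normalizer
    return torch.einsum("nld,ndm,nl->nlm", q, kv, z)

q, k, v = (torch.randn(2, 128, 64) for _ in range(3))
print(linear_attention(q, k, v).shape)  # torch.Size([2, 128, 64])
```

Note that the per-position normalizer z plays the role of the softmax denominator; the "TODO in causal linear denominator" thread (#11) concerns exactly this term in the causal kernel.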