alexzhang13/flashattention2-custom-mask
Triton implementation of FlashAttention2 that adds Custom Masks.
Apache License 2.0 · 62 stars · 5 forks
Issues
#12 Questions about bf16 support and correctness check · xiabingquan · opened 5 days ago · 1 comment
#11 Implementing Standard Interface for _flash_attn_forward and _flash_attn_backward Functions · chenlidar · opened 1 week ago · 0 comments
#10 Support float trainable masks · michal-kustosz · opened 1 week ago · 0 comments
#9 Will there be speedup if the mask is sparse? · ThisisBillhe · closed 1 week ago · 0 comments
#8 pytorch 12.4 · alita-moore · opened 3 weeks ago · 0 comments
#7 precision issue · dyhBUPT · opened 4 weeks ago · 2 comments
#6 Revert "Modify type conversion and mask logic" · alexzhang13 · closed 1 month ago · 0 comments
#5 Modify type conversion and mask logic · Uwwal · closed 1 month ago · 2 comments
#4 Problems with running the small example in readme file · eigenvectorBazuz · closed 1 month ago · 2 comments
#3 Significant Errors in Forward Pass When Not Using Mask · Uwwal · closed 1 month ago · 6 comments
#2 illegal memory access with seqlen = 2048 and dimension = 64 · Uwwal · opened 1 month ago · 4 comments
#1 Comparison to FlexAttention · michaelfeil · opened 1 month ago · 2 comments
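
Several of the issues above concern mask semantics and numerical correctness (#12, #10, #7, #3). As a point of reference for what a "custom mask" means in this context, the expected behavior can be sketched with plain PyTorch. This is a minimal baseline, not the repository's Triton kernel; the shapes and the use of torch.nn.functional.scaled_dot_product_attention are assumptions chosen only to illustrate the semantics and the kind of correctness reference discussed in #12.

```python
# Minimal sketch of masked-attention semantics in plain PyTorch.
# Assumption: this is NOT the repository's Triton kernel, only a
# reference baseline such a kernel could be compared against.
import torch
import torch.nn.functional as F

B, H, L, D = 2, 4, 128, 64  # batch, heads, sequence length, head dim
q = torch.randn(B, H, L, D, dtype=torch.bfloat16, device="cuda")
k = torch.randn(B, H, L, D, dtype=torch.bfloat16, device="cuda")
v = torch.randn(B, H, L, D, dtype=torch.bfloat16, device="cuda")

# Boolean "custom" mask: True = attend, False = block. An arbitrary
# pattern stands in for whatever structure the user needs.
mask = torch.rand(B, H, L, L, device="cuda") > 0.5
# Keep the diagonal so no query row is fully masked (avoids NaNs).
mask |= torch.eye(L, dtype=torch.bool, device="cuda")

# Reference output a custom-mask kernel could be checked against.
ref = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
```

A fused kernel's output would be compared against `ref` with a loose tolerance (e.g. `torch.allclose(out, ref, atol=1e-2)`), since bf16 accumulation order differs between implementations.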