facebookresearch/xformers
Hackable and optimized Transformers building blocks, supporting a composable construction.
https://facebookresearch.github.io/xformers/
8.67k stars · 618 forks

Issues (newest first)
#1161 · Visual Studio 2022, CUDA 12.4, but it still can't build the wheel, and returns this · zslefour · opened 11 hours ago · 1 comment
#1160 · [refactor] Generalized SwiGLU Python code · warpuv · opened 20 hours ago · 0 comments
#1159 · Incompatibility between xformers FA3 Torch custom-op wrapper and recent `flashattn_hopper_cuda` · ohwi · opened 20 hours ago · 1 comment
#1158 · Generalize SwiGLU-related Python code · warpuv · opened 21 hours ago · 0 comments
#1157 · [FA3] Link to CUDA library to fix the FA3 extension build · xuzhao9 · closed 1 day ago · 2 comments
#1156 · Triton module not available: xformers optimizations failing on Windows with CUDA 12.4 · BasimBashir · closed 2 days ago · 1 comment
#1155 · Need to release xformers-0.0.28.post3.whl for manylinux2014_x86_64 · controlRun · opened 3 days ago · 1 comment
#1154 · ValueError: not enough values to unpack (expected 2, got 1) · algorithmconquer · closed 1 week ago · 1 comment
#1153 · RuntimeError: Found an unsupported argument type c10::SymInt in the JIT tracer · J4Q8 · closed 2 days ago · 2 comments
#1152 · [fix] Fix activation checkpointing of SwiGLU when AMP is enabled · warpuv · closed 1 week ago · 1 comment
#1151 · Activation checkpointing on fused SwiGLU is not working when AMP is enabled · warpuv · closed 1 week ago · 0 comments
#1150 · TORCH_CUDA_ARCH_LIST: 8.0+PTX not being detected properly · HighSec-org · closed 1 week ago · 1 comment
#1149 · fix: [N, 1] pattern should take N to create 1D causal mask · davidqqq · opened 1 week ago · 2 comments
#1148 · Can't install xformers · GUEST-1001 · closed 1 week ago · 1 comment
#1147 · Customization of BlockDiagonalMask or LowerTriangularMask · kc334 · opened 1 week ago · 0 comments
#1146 · Can I use xformers with torch-2.4.1? · LukeLIN-web · closed 1 week ago · 1 comment
#1145 · Build without flash attention? · sipie800 · opened 2 weeks ago · 1 comment
#1144 · RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` · antony-frolov · closed 2 weeks ago · 1 comment
#1143 · NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs · Was-11-wendy · closed 2 weeks ago · 2 comments
#1142 · How to install on aarch64 · gongchangsui · opened 2 weeks ago · 1 comment
#1141 · [refactor] Generalization of dual_gemm_silu_identity_mul · warpuv · closed 2 weeks ago · 3 comments
#1140 · Generalization of dual_gemm_silu_identity_mul to use a custom activation function, not only SiLU · warpuv · closed 2 weeks ago · 0 comments
#1139 · Can't install xformers==0.0.28.post2 · Wiselnn570 · closed 3 weeks ago · 1 comment
#1138 · vLLM 0.6.3 create LLM error on Windows: TypeError: autotune() got an unexpected keyword argument 'use_cuda_graph' · xiezhipeng-git · opened 3 weeks ago · 4 comments
#1137 · torch 2.5.1 support · luohao123 · closed 3 weeks ago · 4 comments
#1136 · Update xFormers MinGPT notebook reference · emmanuel-ferdman · closed 1 week ago · 1 comment
#1135 · Incorrect causal mask in global attention · davidqqq · opened 1 month ago · 1 comment
#1134 · timm.models.layers is deprecated / resume_download is deprecated · zd391 · closed 4 weeks ago · 1 comment
#1133 · [fix] Add back BlockDiagonalMask import · tanvitiwari-meta · closed 3 weeks ago · 2 comments
#1132 · Can't use PowerShell command to build for nightly PyTorch on Windows: filename too long · Mescalamba · opened 1 month ago · 5 comments
#1131 · Is there an efficient way to use memory_efficient_attention with a causal mask that has a small rectangle of zeros? · arilato · opened 1 month ago · 1 comment
#1130 · AMD64 version of xformers · pandayummy · closed 1 month ago · 3 comments
#1129 · AttributeError: module 'xformers.ops' has no attribute 'AttentionOpDispatch' · LarsDoorenbos · closed 1 month ago · 1 comment
#1128 · xformers 0.0.20 memory-efficient attention CUTLASS backward is non-deterministic · Bavesh-B · opened 1 month ago · 0 comments
#1127 · [fix] Fix activation checkpointing when using SwiGLUPackedFusedOp · warpuv · closed 3 weeks ago · 8 comments
#1126 · Activation checkpointing is not working on SwiGLU · warpuv · closed 3 weeks ago · 0 comments
#1125 · xformers.sparse.utils._coo_to_csr is incorrect when n > m · francois-rozet · opened 1 month ago · 0 comments
#1124 · Incorrect attention output with SparseCS mask · francois-rozet · opened 1 month ago · 1 comment
#1123 · Add support for cuDNN attention via the CUDNN_FRONTEND Python API? · Skylion007 · opened 1 month ago · 0 comments
#1122 · [Bug] Unexpected behavior of `memory_efficient_attention` with `BlockDiagonalMask` · xiangxu-google · opened 1 month ago · 1 comment
#1121 · Is there any version for Torch 2.5, including dev, like 0.0.29.dev921? · FurkanGozukara · closed 1 week ago · 6 comments
#1120 · Does the memory-efficient attention CUTLASS kernel support variable sequence-length inputs for q/k/v plus tensor bias? · ShijunK · opened 1 month ago · 0 comments
#1119 · Documentation update for FMHA __init__.py · lhallee · opened 1 month ago · 2 comments
#1118 · Why doesn't xformers 0.0.28.post1 have a pre-compiled wheel for Windows? · FurkanGozukara · opened 1 month ago · 7 comments
#1117 · I tried upgrading Visual Studio, CUDA 12.4 and 12.6, but it still can't build the wheel, and it returns this · neutronslime · opened 1 month ago · 5 comments
#1116 · I cannot run xformers on an AMD Radeon RX 6600 with torch 2.4.1 · renich · closed 1 month ago · 2 comments
#1114 · scaled_dot_product_attention output is different from memory_efficient_attention · aenoca · opened 1 month ago · 1 comment
#1113 · Enabling softcap option · SpyrosMouselinos · closed 1 month ago · 1 comment
#1112 · CUTLASS fused multi-head attention · yoon5862 · opened 1 month ago · 2 comments
#1111 · What version of torch for FLUX bf16? · Zhuangvictor0 · closed 1 month ago · 1 comment