zhan-cn opened this issue 1 year ago
Having the same issue, RTX 3090.
Same here, RTX 3080.
same
Same issue, 1660 Ti. I've tried using pip install, I've tried manually building, I've tried xformers (Windows installation, per the wiki), I've tried the xformers re-install argument; nothing's working.
Problem appears to have been resolved by following the steps in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6871#issuecomment-1416400288. Just delete venv folder and run webui with --xformers.
I've gone from getting 2.1s/it to 1.75s/it thanks to Xformers.
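A minimal sketch of that fix in script form, assuming the default layout where the venv folder sits inside stable-diffusion-webui (the path is an assumption; adjust it for your install). Deleting the venv and relaunching with --xformers lets the launcher rebuild it with a matching wheel:

```python
# Sketch: remove the stale venv so the next launch with --xformers
# rebuilds it with a matching xformers wheel. The path is an assumption.
import shutil
from pathlib import Path

venv = Path("stable-diffusion-webui") / "venv"  # adjust to your install
if venv.exists():
    shutil.rmtree(venv)
    print(f"Deleted {venv}; now relaunch webui with --xformers")
```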
Having the same issue, Win10, RTX 3060, CUDA 11.1.
Appears to have been resolved by following the steps in #6871 (comment). Just delete the venv folder and run webui with --xformers.
I deleted the xformers site-packages from the venv and pip-installed xformers==0.0.16, but when I ran the webui, it just installed the venv xformers (xformers-0.0.16rc425.dist-info) right back. Build cuda_11.8.r11.8/compiler.31833905_0, RTX 3070.
Same here, Ubuntu 18.04, RTX 3080 Ti, CUDA 12.1.
I hit this issue too. In my case, I pulled new code and launched the webui, and the new launch.py installed xformers again, at version xformers==0.0.16rc425, which was not compiled for my CUDA version. I uninstalled it and installed xformers from source again, and everything works fine now. I also commented out the line run_pip(f"install -r \"{requirements_file}\"", "requirements for Web UI") in launch.py.
One more piece of advice: check that everything runs on a single Python version. I had a mismatch where the pip path resolved to 3.9 while the python path resolved to 3.10.
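A quick way to catch exactly that mismatch (a sketch; run it with whichever interpreter launches the webui): invoking pip through sys.executable guarantees both lines describe the same Python, so compare the output against what a bare pip --version prints in your shell.

```python
# Sketch: confirm that `python` and `pip` resolve to the same interpreter.
import subprocess
import sys

print("python:", sys.version.split()[0], "at", sys.executable)
# pip invoked via the same interpreter; a bare `pip --version` in your
# shell should report the same path, or you have a version split.
subprocess.run([sys.executable, "-m", "pip", "--version"], check=True)
```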
@zoezhu God bless, it worked!
I solved it by temporarily removing the --xformers flag. I'm penalized in speed, but so what.
Sorry, what did you remove? Can you elaborate?
args
When you run webui.bat you can pass flags (command-line arguments) such as --no-half or, in many cases, --xformers, which tells the webui to use the Python library xformers. So he launched it without that library in play.
This also happens on Apple Silicon (M1 Max, Ventura 13.3.1 (22E261)).
In my case, I set the xformers version to 0.0.16rc425 in launch.py (line 228), and it seems to work.
I'm running the vladmandic/automatic fork. I had to adjust some requirements mismatches between the Python/torch/torchvision/xformers versions just to get the program to run, and I'm getting a similar error. Ubuntu 22.04, Ryzen 5800X, RTX 3090.
xformers installed: ubuntu-22.04-py3.10-torch2.0.0+cu118
```
Launching launch.py...
14:32:14-702100 INFO Starting SD.Next
14:32:14-704433 INFO Python 3.10.10 on Linux
14:32:14-724191 INFO Version: 99dc75c0 Fri May 5 09:28:44 2023 -0400
14:32:15-339630 INFO Latest published version: f6898c9aec9c8b40b55de52e1bf1b4b83028897d 2023-05-05T17:40:53Z
14:32:15-341186 INFO Setting environment tuning
14:32:15-342024 INFO nVidia CUDA toolkit detected
14:32:16-086818 INFO Torch 2.0.1+cu118
14:32:16-096932 INFO Torch backend: nVidia CUDA 11.8 cuDNN 8700
14:32:16-107578 INFO Torch detected GPU: NVIDIA GeForce RTX 3090 VRAM 24257 Arch (8, 6) Cores 82
```
...blah blah, xformers loads, generation starts (the image shows in the preview), and then this poops out:
```
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(1, 4096, 1, 512) (torch.float16)
     key         : shape=(1, 4096, 1, 512) (torch.float16)
     value       : shape=(1, 4096, 1, 512) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    Operator wasn't built - see `python -m xformers.info` for more info
`flshattF` is not supported because:
    xFormers wasn't build with CUDA support
    max(query.shape[-1] != value.shape[-1]) > 128
    Operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    max(query.shape[-1] != value.shape[-1]) > 128
    requires A100 GPU
`smallkF` is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    max(query.shape[-1] != value.shape[-1]) > 32
    Operator wasn't built - see `python -m xformers.info` for more info
unsupported embed per head: 512
```
Guess I shoulda bought the A100? I'm gonna try to build it myself, but I'm not sure how to activate the correct Python venv for the project...
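Independent of the webui, a minimal smoke test (my own sketch, not from the thread; run it with the venv's own interpreter, e.g. venv/bin/python, on a CUDA machine) reproduces the same triage: if the wheel was built without CUDA kernels, the call below raises exactly this NotImplementedError, and python -m xformers.info gives the per-operator detail.

```python
# Sketch: verify xformers' CUDA attention kernels outside the webui.
import torch
import xformers
import xformers.ops as xops

print("xformers", xformers.__version__,
      "| torch", torch.__version__,
      "| CUDA", torch.version.cuda)

# Small fp16 tensors in xformers' (batch, seq_len, heads, head_dim) layout.
q = torch.randn(1, 64, 8, 40, dtype=torch.float16, device="cuda")
# Raises NotImplementedError if the wheel has no CUDA kernels built in.
out = xops.memory_efficient_attention(q, q, q)
print("memory_efficient_attention OK:", out.shape)
```

If this succeeds, the install itself is fine and the failure is specific to the shapes the model feeds in (the trace above also complains about the 512 embed per head).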
Problem appears to have been resolved by following the steps in #6871 (comment). Just delete venv folder and run webui with --xformers.
I've gone from getting 2.1s/it to 1.75s/it thanks to Xformers.
Hello, where is the venv directory?
Look in the stable-diffusion-webui directory for "venv".
I tried pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers. It works!
Hi everyone, I fixed the issue FOR LINUX USERS by editing the file "stable-diffusion-webui/modules/launch_utils.py": change the "xformers_package" line to the latest xformers package, as below, and relaunch with "./webui.sh --xformers":

xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.21.dev543')

Windows users can try this too. Try it before deleting the venv folder; if it doesn't work, rename or delete venv and relaunch the webui with the xformers flag.
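For context, a sketch of that line as it sits in launch_utils.py (abridged; the exact default spec varies by commit). Since the value is read from the environment first, exporting XFORMERS_PACKAGE before launching should be an equivalent, no-edit alternative:

```python
import os

# stable-diffusion-webui/modules/launch_utils.py, abridged sketch: the
# hard-coded spec is only a fallback; a value exported in the
# XFORMERS_PACKAGE environment variable takes precedence over the edit.
xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.21.dev543')
print(xformers_package)
```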
How do I resolve this error? I uninstalled it and installed it again, but it doesn't solve the problem.
```
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(2, 4096, 8, 40) (torch.float16)
     key         : shape=(2, 4096, 8, 40) (torch.float16)
     value       : shape=(2, 4096, 8, 40) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`flshattF` is not supported because:
    xFormers wasn't build with CUDA support
    Operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    Operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
    requires A100 GPU
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    Operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    max(query.shape[-1] != value.shape[-1]) > 32
    Operator wasn't built - see `python -m xformers.info` for more info
unsupported embed per head: 40
```
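Since every backend in that trace reports "Operator wasn't built", the installed wheel simply ships no CUDA kernels, so reinstalling the same wheel reproduces the problem. The full report the message points to can also be produced from inside Python (a sketch using the stdlib runpy, equivalent to running python -m xformers.info in the venv):

```python
# Sketch: run the xformers.info module programmatically; it prints which
# memory-efficient attention backends this build actually provides.
import runpy

runpy.run_module("xformers.info", run_name="__main__")
```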
Same here, RTX 3090, Windows 10.
I have the same problem, 7900xtx, Fedora Linux
Same problem, RTX 3090; for me it has occurred since the beginning of SDXL.
I used Google Colab and tried !pip install --pre -U xformers. It works!
Thanks, that worked!
Windows 11, GTX 1660S, R5 5600, 16 GB, same problem.
Win11, PC (not Colab) install of SD, same error. What is the solution? Delete the venv folder and run webui with --xformers?
I uninstalled xformers and then reinstalled it, which solved the problem.
Worked on the Kaggle platform: !pip install --pre -U xformers
Try this: pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu118
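One caveat with that command (an added note): the cuXXX tag in the index URL should match the CUDA build of your installed torch wheel, otherwise you can trade one mismatch for another. A quick check, as a sketch:

```python
# Sketch: print torch's CUDA build tag; `cu118` in the index URL above
# corresponds to a torch wheel built for CUDA 11.8.
import torch

print("torch", torch.__version__, "built for CUDA", torch.version.cuda)
print("GPU visible:", torch.cuda.is_available())
```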
If anyone still has issues with xformers on macOS, here is what I did: add --xformers to COMMANDLINE_ARGS in webui-user.sh, remove the venv with rm -rf venv, then run webui.sh with this command (using llvm from brew):

brew install llvm
CC=/usr/local/opt/llvm/bin/clang CXX=/usr/local/opt/llvm/bin/clang++ ./webui.sh

xformers will be installed on the webui.sh launch. However, I think this will not work without CUDA. I'm looking for alternatives that would make it work with MPS.
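For what it's worth, Apple Silicon has no CUDA at all, so a CUDA-built xformers cannot help there; the webui falls back to PyTorch's MPS backend instead. A quick availability check, as a sketch:

```python
# Sketch: confirm the Metal (MPS) backend that torch uses on Apple Silicon.
import torch

print("MPS built:    ", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())
```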
Is there an existing issue for this?
What happened?
When I run .\webui.bat --xformers or .\webui.bat --xformers --no-half --medvram, I hit this bug: NotImplementedError: No operator found for memory_efficient_attention_forward with inputs.

Steps to reproduce the problem
1. Run .\webui.bat --xformers --no-half --medvram
2. Open http://127.0.0.1:7860/ and log in
3. Choose a jpg, then generate
What should have happened?
A jpeg should have been generated.
Commit where the problem happens
.\webui.bat --xformers --no-half --medvram
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
Microsoft Edge
Command Line Arguments
List of extensions
no
Console logs
Additional information
I have rebuilt xformers; I think maybe it's because I use a GTX 1650 4 GB.
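(Added note: the GTX 1650 is a Turing card, compute capability 7.5, so its 4 GB of VRAM is a more likely constraint than an unsupported architecture. A quick check of what torch actually sees, as a sketch:)

```python
# Sketch: print the detected GPU, its compute capability, and VRAM,
# to rule out an architecture problem on cards like the GTX 1650.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name, "capability", (props.major, props.minor),
          "VRAM", round(props.total_memory / 2**30, 1), "GiB")
else:
    print("No CUDA device visible to torch")
```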