papersplease closed this issue 2 years ago.
Can you please test removing the try block and just importing without checks in modules/sd_hijack_optimizations.py?
Remove the cmd_opts check block entirely and input this:
import xformers.ops
import functorch
xformers._is_functorch_available = True
shared.xformers_available = True
This will likely fail as well but it'll output a more useful error.
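A quick way to surface that fuller error without editing the file is to run the same imports directly with the venv's own interpreter; this is just a sketch assuming the default venv layout:

source venv/bin/activate        # on Windows: venv\Scripts\activate
python -c "import xformers.ops, functorch; print('xformers import OK')"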
It throws ModuleNotFoundError: No module named 'xformers' on the import xformers.ops line. I take it I botched the installation somehow...
Are you using venv? If yes, you need to install xformers inside venv.
Yes I'm using venv and installed xformers inside it; the library compiled as intended, and the binary is physically at the intended path inside venv. I'll try to clean it up and recompile.
Turns out the leftovers from the previous failed compilation were somehow preventing it from working normally; it's working now.
Can you elaborate on how to "clean it up and recompile", please? I have the exact same error :c
I have this problem too. How do you "clean it up and recompile"?
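For those asking, a typical "clean it up and recompile" sequence looks roughly like the sketch below; the repositories/xformers path and the venv location are assumptions, so adjust them to wherever you actually cloned and built xformers:

source venv/bin/activate                    # build inside the webui's venv, not the system Python
pip uninstall -y xformers                   # drop any partially installed copy
cd repositories/xformers                    # assumed clone location
git submodule update --init --recursive     # make sure the third-party sources are present
rm -rf build/ dist/ *.egg-info              # remove leftovers from the failed build
pip install -e .                            # rebuild and install into the venv
cd ../..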
I solved it. My issue was that the program didn't build xformers at all, because I had somehow installed AUTOMATIC1111 with Python 3.8. I noticed this because I erased the venv folder to redo the whole xformers installation process, which then gave me this error: The detected CUDA version (11.8) mismatches the version that was used to compile PyTorch (11.3). Please make sure to use the same CUDA versions.
So after swapping my default Python from 3.8 to 3.10 and reinstalling the whole AUTOMATIC1111 web UI, I was finally able to install xformers with no issues.
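If you are unsure whether you have the same mismatch, a quick check (run inside the webui's environment; only a sketch) is:

python --version                                                          # the webui expects Python 3.10.x
python -c "import torch; print(torch.__version__, torch.version.cuda)"    # CUDA version PyTorch was compiled against
nvcc --version                                                            # CUDA toolkit that will compile xformers; the two should match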
Launching Web UI with arguments: --force-enable-xformers
Cannot import xformers
Traceback (most recent call last):
File "Z:\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 18, in <module>
import xformers.ops
ModuleNotFoundError: No module named 'xformers'
I am getting the same error.
It compiles and produces xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl, which installs using pip without any problems.
How do I even check what's wrong here?
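One way to check is to confirm that the pip which installed the wheel belongs to the same interpreter the webui runs with; a sketch, assuming the default venv location inside the webui folder:

venv\Scripts\activate                                     # Windows; on Linux/macOS: source venv/bin/activate
python -c "import sys; print(sys.executable)"             # should point inside the webui's venv
pip show xformers                                         # should list the package and an install Location inside the venv
python -c "import xformers; print(xformers.__file__)"     # confirms which copy actually gets imported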
Rather than manually installing each missing dependency one by one, this is the 'proper' and more sustainable way to fix any missing dependency issues:
You can modify your run_webui_mac.sh to add a line that tries to install the required dependencies with pip install -r requirements.txt each time it starts, after it pulls the latest code. This would probably make a sensible default, and is fairly quick to run, so it might be worth someone telling the original author to include it in their setup script to avoid this sort of error for end users in future. Here is what mine looks like:
#!/usr/bin/env bash -l

pyenv local anaconda3-2022.05

# This should not be needed since it's configured during installation, but might as well have it here.
conda env config vars set PYTORCH_ENABLE_MPS_FALLBACK=1

# Activate conda environment
conda activate web-ui

# Pull the latest changes from the repo
git pull --rebase

+ # Update the dependencies if needed
+ pip install -r requirements.txt

# Run the web ui
python webui.py --deepdanbooru --precision full --no-half --use-cpu Interrogate GFPGAN CodeFormer $@

# Deactivate conda environment
conda deactivate
Originally posted by @0xdevalias in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/4109#issuecomment-1304747007
For the others who said that pip install -r requirements.txt didn't work for them, or who found that despite pip installing the individual requirements they still don't seem to 'be there', it might be an issue with your conda environment and which pip is being used. Sometimes the version you're calling doesn't actually install the packages to the correct place, and so they can't be found later.
I have a new theory for you, based on this StackOverflow:
What do you see when you activate your conda environment and then run which -a pip?
If it's only something like:
/opt/conda/bin/pip
And not something like:
/opt/conda/envs/web-ui/bin/pip
/opt/conda/bin/pip
Then you can likely fix it by either doing a conda install pip and then using pip as normal, or using python -m pip install FOO instead of using pip directly.
Originally posted by @0xdevalias in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/4109#issuecomment-1308357941
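Putting that together, a minimal check-and-fix sequence (assuming the conda environment is named web-ui, as in the script above) might look like:

conda activate web-ui
which -a pip                                   # the env's pip should be listed first
python -m pip --version                        # shows which interpreter and site-packages python -m pip targets
python -m pip install -r requirements.txt      # install against that interpreter explicitly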
I did a conda install pip and then used python -m; it looked like it wasn't going to work, but then I changed my run script a tiny bit
Curious, what made you think it wasn't going to work? And what was the specific change to the run script that made it work for you? Was it using requirements_versions.txt rather than requirements.txt?
python -m pip install -r requirements_versions.txt
Originally posted by @0xdevalias in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/4109#issuecomment-1309356932
Hopefully the above helps people who are still running into this issue 🖤
Originally posted by @0xdevalias in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/4061#issuecomment-1314398808
Having the same problem.
Describe the bug
While trying to generate an image with --force-enable-xformers on a GTX 970, I'm getting the error:
NameError: name 'xformers' is not defined
Full traceback is here: https://gist.githubusercontent.com/papersplease/d60b889881b27e6ebc5484707ad0fd9e/raw/f6cb1d789bc82e85b1926da582c0568c5ad68d90/gistfile1.txt
To Reproduce
(I needed to set TORCH_CUDA_ARCH_LIST=5.2 for it to work)
Desktop:
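For a GTX 970 specifically (compute capability 5.2), the usual workaround noted in the report is to pin the architecture list before building xformers from source; a sketch, with the clone path assumed:

export TORCH_CUDA_ARCH_LIST=5.2       # GTX 970 is compute capability 5.2
cd repositories/xformers              # assumed clone location
pip install -v -e .                   # rebuild inside the webui venv with the pinned arch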