ChargedAtol338 opened this issue 1 year ago (Open)
Having this issue as well after updating to Sonoma.
I also had the same problem after updating to macOS 14. My current workaround is to launch with the additional --no-half flag:
./webui.sh --no-half
That suggestion came from this thread: https://github.com/apple/ml-stable-diffusion/issues/192
Does --no-half have a negative impact on performance in terms of speed?
I can't check that, as I don't have a second machine with an older macOS version and I didn't run a benchmark before upgrading. I did, however, benchmark which sampler is fastest now; that was done with the ./webui.sh --no-half startup command.
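For what it's worth, instead of passing the flag on every launch, here is a minimal sketch of making it persistent via webui-user.sh (the COMMANDLINE_ARGS variable is the one used by the stock launch script; the flag choice here is just this thread's workaround, not an official recommendation):
# webui-user.sh (sketch): bake --no-half into the default launch arguments
# so that a plain ./webui.sh picks it up on every start
export COMMANDLINE_ARGS="--no-half"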
Getting the same issue on macOS 14; even using --no-half, inpainting is no longer working :(
File "/Users/xxx/Desktop/learn-sd/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "/Users/xxx/Desktop/learn-sd/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1429, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "/Users/xxx/Desktop/learn-sd/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1239, in preprocess_data
processed_input.append(block.preprocess(inputs[i]))
File "/Users/xxx/Desktop/learn-sd/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/components/image.py", line 270, in preprocess
assert isinstance(x, dict)
AssertionError
To make it work I also did a full cleanup of the repository with the command git clean -fdx, but remember to back up your output images first, as I forgot to do that :P. I'm currently on commit 5ef669de080814067961f28357256e8fe27544f4.
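For anyone repeating this, a small sketch of backing up generated images before the cleanup, since git clean -fdx deletes all untracked files (the outputs folder name is the webui default; adjust if you have changed it):
cp -R outputs ~/sd-outputs-backup   # keep a copy of generated images first
git clean -fdx                      # then wipe untracked files and relaunch with ./webui.sh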
Someone suggested I try upgrading torch to the dev nightly. That seemed to fix the Sonoma 14 issues for me by itself, at least so far...
pip3 install --upgrade --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu
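After the upgrade, a quick sanity check from inside the webui's virtual environment (the venv path assumed here is the repo default) confirms the nightly is active and that torch can see the MPS backend:
./venv/bin/python -c "import torch; print(torch.__version__, torch.backends.mps.is_available())"
# expect a 2.x dev version and True on Apple Silicon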
Maybe not related, but I am using an Apple Silicon Mac with the latest OS and seeing this error in the logs. I just installed Stable Diffusion two days ago, so it's a fresh build from the latest Git.
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Not related. xformers is a CUDA thing. CUDA things have no meaning for Macs. The "warnings" can be safely ignored.
Running the same git clean -fdx cleanup on commit 5ef669d didn't resolve anything on my M1 iMac.
My webui-user.sh, which solved everything on an M1 Max MacBook Pro with macOS 14:
#!/bin/bash
#########################################################
# Uncomment and change the variables below to your need:#
#########################################################
# Install directory without trailing slash
#install_dir="/home/$(whoami)"
# Name of the subdirectory
#clone_dir="stable-diffusion-webui"
# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS="--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate"
# python3 executable
#python_cmd="python3"
# git executable
#export GIT="git"
# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv)
#venv_dir="venv"
# script to launch to start the app
#export LAUNCH_SCRIPT="launch.py"
# install command for torch
export TORCH_COMMAND="pip install --pre torch==2.2.0.dev20231012 torchvision==0.17.0.dev20231012 --index-url https://download.pytorch.org/whl/nightly/cpu"
# Requirements file to use for stable-diffusion-webui
#export REQS_FILE="requirements_versions.txt"
# Fixed git repos
#export K_DIFFUSION_PACKAGE=""
#export GFPGAN_PACKAGE=""
# Fixed git commits
#export STABLE_DIFFUSION_COMMIT_HASH=""
#export CODEFORMER_COMMIT_HASH=""
#export BLIP_COMMIT_HASH=""
# Uncomment to enable accelerated launch
#export ACCELERATE="True"
# Uncomment to disable TCMalloc
#export NO_TCMALLOC="True"
###########################################
Remember to run:
./webui.sh --reinstall-torch
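In other words, the assumed sequence is: edit webui-user.sh as above, force the torch reinstall once so the pinned TORCH_COMMAND actually runs, then launch normally:
./webui.sh --reinstall-torch   # re-runs TORCH_COMMAND and installs the pinned nightly build
./webui.sh                     # subsequent launches can drop the flag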
The webui-user.sh above did the trick.
Yes, the trick works, but I noticed it also works with torch versions that aren't pre-release and are more stable: PyTorch 2.1.0 works perfectly in my case.
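Concretely, a hedged sketch of the corresponding TORCH_COMMAND line for the stable release (the torchvision 0.16.0 pin matching torch 2.1.0 is my assumption, not something confirmed in this thread):
# webui-user.sh (sketch): stable pins instead of the nightly index
export TORCH_COMMAND="pip install torch==2.1.0 torchvision==0.16.0"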
I love you, the webui-user.sh above worked perfectly for me.
export TORCH_COMMAND="pip install --pre torch==2.2.0.dev20231012 torchvision==0.17.0.dev20231012 --index-url https://download.pytorch.org/whl/nightly/cpu"
export TORCH_COMMAND="pip install --pre torch==2.2.0.dev20231021 torchvision==0.17.0.dev20231021 --index-url https://download.pytorch.org/whl/nightly/cpu"
version 20231021 works for me, thank you
export TORCH_COMMAND="pip install --pre torch==2.2.0.dev20231012 torchvision==0.17.0.dev20231012 --index-url https://download.pytorch.org/whl/nightly/cpu"
export TORCH_COMMAND="pip install --pre torch==2.2.0.dev20231021 torchvision==0.17.0.dev20231021 --index-url https://download.pytorch.org/whl/nightly/cpu"
version 20231021 works for me, thank you
Tried this today and those versions were not listed. I had to use:
pip install --pre torch==2.3.0.dev20231220 torchvision==0.18.0.dev20231220 --index-url https://download.pytorch.org/whl/nightly/cpu
I'm just getting MPS memory allocation errors out the wazoo, no matter what torch I use.
RuntimeError: MPS backend out of memory (MPS allocated: 12.98 GB, other allocations: 2.00 GB, max allowed: 18.13 GB). Tried to allocate 4.00 GB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
This is with the suggestions above and the new nightly: export TORCH_COMMAND="pip install --pre torch==2.3.0.dev20240115 torchvision==0.18.0.dev20240115 --index-url https://download.pytorch.org/whl/nightly/cpu"
I'm a bit confused as to what to do. This fails no matter what on my M1 MacBook Air 16 GB, but works flawlessly (without having to switch to a dev torch) on an M2 Mac mini 16 GB. I feel it should be easier than this 🤣 but sadly, no.
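One thing worth trying, per the hint in the error message itself, is relaxing the MPS allocator limit before launching. This is only a sketch, and the error message warns the setting may cause system instability; --medvram is the existing webui flag for reducing memory use at the cost of speed:
export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0   # disable the upper bound on MPS allocations
./webui.sh --no-half --medvram                # lower memory pressure during generation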
Is there an existing issue for this?
What happened?
When I try to generate an image, the terminal reports an error on the Neural Engine. This error appeared with the new macOS update, macOS Sonoma 14.
Steps to reproduce the problem
Generate an image on a Mac with Stable Diffusion.
What should have happened?
The image should have been generated.
Sysinfo
sysinfo-2023-09-27-15-15.txt
What browsers do you use to access the UI?
Apple Safari
Console logs
Additional information
I was using Stable Diffusion as I do every day, but following the update to macOS Sonoma it no longer works; on top of that, it constantly freezes and makes the Mac lag.