AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: AttributeError: 'NoneType' object has no attribute 'lowvram' -- Clean install on Mac #15637

Open ghost opened 7 months ago

ghost commented 7 months ago

Checklist

What happened?

On a clean install, selecting a downloaded model or the preloaded v1-5 model results in an AttributeError.

Terminal:

e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053
Loading weights [e1441589a6] from /Users/[obfuscated]/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned.ckpt
Creating model from config: /Users/[obfuscated]/stable-diffusion-webui/configs/v1-inference.yaml
changing setting sd_model_checkpoint to v1-5-pruned.ckpt: AttributeError
Traceback (most recent call last):
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/options.py", line 165, in set
    option.onchange()
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
    if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'
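The traceback shows that sd_model is still None when send_model_to_cpu() runs: the initial model load fails (see the "Torch not compiled with CUDA enabled" exception in the sysinfo and console logs below), and the sd_model_checkpoint onchange handler then tries to move a model that was never loaded. As a rough sketch only (the function name and the m.lowvram check come from the traceback; the body and the imports are illustrative, not the project's actual code), a guard of this shape would avoid the secondary AttributeError, though it would not fix the underlying load failure:

    # Illustrative sketch of a defensive guard in modules/sd_models.py
    # (not the upstream implementation): skip the CPU move when no model
    # was ever loaded.
    from modules import devices, lowvram  # webui helper modules (assumed imports)

    def send_model_to_cpu(m):
        if m is None:
            return  # the load failed earlier, so there is nothing to move
        if getattr(m, "lowvram", False):
            lowvram.send_everything_to_cpu()
        else:
            m.to(devices.cpu)
        devices.torch_gc()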

Steps to reproduce the problem

On a clean install, after launching the webui, attempt to select the v1-5 pruned .ckpt file.

What should have happened?

The selected model should load, and image generation should proceed.

What browsers do you use to access the UI?

Google Chrome

Sysinfo

{ "Platform": "macOS-12.1-arm64-arm-64bit", "Python": "3.10.14", "Version": "v1.9.3", "Commit": "1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0", "Script path": "/Users/[obfuscated]/stable-diffusion-webui", "Data path": "/Users/[obfuscated]/stable-diffusion-webui", "Extensions dir": "/Users/[obfuscated]/stable-diffusion-webui/extensions", "Checksum": "d56275202269240dd6f316f3de94fd6195326487d0a53de5de030e8cc3084cb7", "Commandline": [ "launch.py", "--skip-torch-cuda-test", "--upcast-sampling", "--no-half-vae", "--use-cpu", "interrogate" ], "Torch env info": { "torch_version": "2.1.0", "is_debug_build": "False", "cuda_compiled_version": null, "gcc_version": null, "clang_version": "13.1.6 (clang-1316.0.21.2.5)", "cmake_version": "version 3.29.2", "os": "macOS 12.1 (arm64)", "libc_version": "N/A", "python_version": "3.10.14 (main, Mar 20 2024, 03:57:45) [Clang 14.0.0 (clang-1400.0.29.202)] (64-bit runtime)", "python_platform": "macOS-12.1-arm64-arm-64bit", "is_cuda_available": "False", "cuda_runtime_version": null, "cuda_module_loading": "N/A", "nvidia_driver_version": null, "nvidia_gpu_models": null, "cudnn_version": null, "pip_version": "pip3", "pip_packages": [ "numpy==1.26.2", "open-clip-torch==2.20.0", "pytorch-lightning==1.9.4", "torch==2.1.0", "torchdiffeq==0.2.3", "torchmetrics==1.3.2", "torchsde==0.2.6", "torchvision==0.16.0" ], "conda_packages": null, "hip_compiled_version": "N/A", "hip_runtime_version": "N/A", "miopen_runtime_version": "N/A", "caching_allocator_config": "", "is_xnnpack_available": "True", "cpu_info": "Apple M1 Pro" }, "Exceptions": [ { "exception": "Torch not compiled with CUDA enabled", "traceback": [ [ "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py, line 620, get_sd_model", "load_model()" ], [ "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py, line 770, load_model", "with devices.autocast(), torch.no_grad():" ], [ "/Users/[obfuscated]/stable-diffusion-webui/modules/devices.py, line 218, autocast", "if has_xpu() or has_mps() or cuda_no_autocast():" ], [ "/Users/[obfuscated]/stable-diffusion-webui/modules/devices.py, line 28, cuda_no_autocast", "device_id = get_cuda_device_id()" ], [ "/Users/[obfuscated]/stable-diffusion-webui/modules/devices.py, line 40, get_cuda_device_id", ") or torch.cuda.current_device()" ], [ "/Users/[obfuscated]/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/init.py, line 769, current_device", "_lazy_init()" ], [ "/Users/[obfuscated]/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/init.py, line 289, _lazy_init", "raise AssertionError(\"Torch not compiled with CUDA enabled\")" ] ] }, { "exception": "'NoneType' object has no attribute 'lowvram'", "traceback": [ [ "/Users/[obfuscated]/stable-diffusion-webui/modules/options.py, line 165, set", "option.onchange()" ], [ "/Users/[obfuscated]/stable-diffusion-webui/modules/call_queue.py, line 13, f", "res = func(*args, **kwargs)" ], [ "/Users/[obfuscated]/stable-diffusion-webui/modules/initialize_util.py, line 181, ", "shared.opts.onchange(\"sd_model_checkpoint\", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)" ], [ "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py, line 860, reload_model_weights", "sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)" ], [ "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py, line 793, reuse_model_from_already_loaded", "send_model_to_cpu(sd_model)" ], [ "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py, line 
662, send_model_to_cpu", "if m.lowvram:" ] ] } ], "CPU": { "model": "arm", "count logical": 10, "count physical": 10 }, "RAM": { "total": "16GB", "used": "5GB", "free": "62MB", "active": "3GB", "inactive": "3GB" }, "Extensions": [], "Inactive extensions": [], "Environment": { "COMMANDLINE_ARGS": "--skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate", "GIT": "git", "GRADIO_ANALYTICS_ENABLED": "False", "TORCH_COMMAND": "pip install torch==2.1.0 torchvision==0.16.0" }, "Config": { "ldsr_steps": 100, "ldsr_cached": false, "SCUNET_tile": 256, "SCUNET_tile_overlap": 8, "SWIN_tile": 192, "SWIN_tile_overlap": 8, "SWIN_torch_compile": false, "hypertile_enable_unet": false, "hypertile_enable_unet_secondpass": false, "hypertile_max_depth_unet": 3, "hypertile_max_tile_unet": 256, "hypertile_swap_size_unet": 3, "hypertile_enable_vae": false, "hypertile_max_depth_vae": 3, "hypertile_max_tile_vae": 128, "hypertile_swap_size_vae": 3, "sd_model_checkpoint": "v1-5-pruned.ckpt [e1441589a6]", "sd_checkpoint_hash": "e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053" }, "Startup": { "total": 68.00557136535645, "records": { "initial startup": 0.0009272098541259766, "prepare environment/checks": 4.220008850097656e-05, "prepare environment/git version info": 0.018723011016845703, "prepare environment/install torch": 13.662177085876465, "prepare environment/torch GPU test": 6.175041198730469e-05, "prepare environment/install clip": 3.7877581119537354, "prepare environment/install open_clip": 4.085432052612305, "prepare environment/clone repositores": 7.612929821014404, "prepare environment/install requirements": 29.78075909614563, "prepare environment/run extensions installers": 0.004931211471557617, "prepare environment": 58.95587396621704, "launcher": 0.022570133209228516, "import torch": 4.146008729934692, "import gradio": 0.765498161315918, "setup paths": 1.2596769332885742, "import ldm": 0.013821840286254883, "import sgm": 5.245208740234375e-06, "initialize shared": 0.3825209140777588, "other imports": 1.01145601272583, "opts onchange": 0.00033593177795410156, "setup SD model": 6.604194641113281e-05, "setup codeformer": 0.003963947296142578, "setup gfpgan": 0.010995149612426758, "set samplers": 3.886222839355469e-05, "list extensions": 0.0009171962738037109, "restore config state file": 8.821487426757812e-06, "list SD models": 0.008134841918945312, "list localizations": 0.00017118453979492188, "load scripts/custom_code.py": 0.002298116683959961, "load scripts/img2imgalt.py": 0.0015780925750732422, "load scripts/loopback.py": 0.0011126995086669922, "load scripts/outpainting_mk_2.py": 0.002089977264404297, "load scripts/poor_mans_outpainting.py": 0.0015411376953125, "load scripts/postprocessing_codeformer.py": 0.0005950927734375, "load scripts/postprocessing_gfpgan.py": 0.0011141300201416016, "load scripts/postprocessing_upscale.py": 0.0018849372863769531, "load scripts/prompt_matrix.py": 0.001984834671020508, "load scripts/prompts_from_file.py": 0.0018491744995117188, "load scripts/sd_upscale.py": 0.0013020038604736328, "load scripts/xyz_grid.py": 0.008707761764526367, "load scripts/ldsr_model.py": 0.3379373550415039, "load scripts/lora_script.py": 0.1310436725616455, "load scripts/scunet_model.py": 0.016871929168701172, "load scripts/swinir_model.py": 0.02359914779663086, "load scripts/hotkey_config.py": 0.000881195068359375, "load scripts/extra_options_section.py": 0.0009827613830566406, "load scripts/hypertile_script.py": 0.04871392250061035, "load 
scripts/hypertile_xyz.py": 0.0001862049102783203, "load scripts/postprocessing_autosized_crop.py": 0.0010879039764404297, "load scripts/postprocessing_caption.py": 0.0004470348358154297, "load scripts/postprocessing_create_flipped_copies.py": 0.00043702125549316406, "load scripts/postprocessing_focal_crop.py": 0.0026140213012695312, "load scripts/postprocessing_split_oversized.py": 0.0008080005645751953, "load scripts/soft_inpainting.py": 0.0022139549255371094, "load scripts/comments.py": 0.01715993881225586, "load scripts/refiner.py": 0.002248048782348633, "load scripts/sampler.py": 0.0008349418640136719, "load scripts/seed.py": 0.0009102821350097656, "load scripts": 0.6150598526000977, "load upscalers": 0.0033631324768066406, "refresh VAE": 0.0006058216094970703, "refresh textual inversion templates": 0.0002219676971435547, "scripts list_optimizers": 0.0002779960632324219, "scripts list_unets": 1.3113021850585938e-05, "reload hypernetworks": 0.00030112266540527344, "initialize extra networks": 0.006253719329833984, "scripts before_ui_callback": 0.0002532005310058594, "create ui": 0.23605775833129883, "gradio launch": 0.5480811595916748, "add APIs": 0.016994953155517578, "app_started_callback/lora_script.py": 0.0005769729614257812, "app_started_callback": 0.000576019287109375 } }, "Packages": [ "accelerate==0.21.0", "aenum==3.1.15", "aiofiles==23.2.1", "aiohttp==3.9.5", "aiosignal==1.3.1", "altair==5.3.0", "antlr4-python3-runtime==4.9.3", "anyio==3.7.1", "async-timeout==4.0.3", "attrs==23.2.0", "blendmodes==2022", "certifi==2024.2.2", "charset-normalizer==3.3.2", "clean-fid==0.1.35", "click==8.1.7", "clip==1.0", "contourpy==1.2.1", "cycler==0.12.1", "deprecation==2.1.0", "diskcache==5.6.3", "einops==0.4.1", "exceptiongroup==1.2.1", "facexlib==0.3.0", "fastapi==0.94.0", "ffmpy==0.3.2", "filelock==3.13.4", "filterpy==1.4.5", "fonttools==4.51.0", "frozenlist==1.4.1", "fsspec==2024.3.1", "ftfy==6.2.0", "gitdb==4.0.11", "gitpython==3.1.32", "gradio-client==0.5.0", "gradio==3.41.2", "h11==0.12.0", "httpcore==0.15.0", "httpx==0.24.1", "huggingface-hub==0.22.2", "idna==3.7", "imageio==2.34.1", "importlib-resources==6.4.0", "inflection==0.5.1", "jinja2==3.1.3", "jsonmerge==1.8.0", "jsonschema-specifications==2023.12.1", "jsonschema==4.21.1", "kiwisolver==1.4.5", "kornia==0.6.7", "lark==1.1.2", "lazy-loader==0.4", "lightning-utilities==0.11.2", "llvmlite==0.42.0", "markupsafe==2.1.5", "matplotlib==3.8.4", "mpmath==1.3.0", "multidict==6.0.5", "networkx==3.3", "numba==0.59.1", "numpy==1.26.2", "omegaconf==2.2.3", "open-clip-torch==2.20.0", "opencv-python==4.9.0.80", "orjson==3.10.1", "packaging==24.0", "pandas==2.2.2", "piexif==1.1.3", "pillow-avif-plugin==1.4.3", "pillow==9.5.0", "pip==24.0", "protobuf==3.20.0", "psutil==5.9.5", "pydantic==1.10.15", "pydub==0.25.1", "pyparsing==3.1.2", "python-dateutil==2.9.0.post0", "python-multipart==0.0.9", "pytorch-lightning==1.9.4", "pytz==2024.1", "pywavelets==1.6.0", "pyyaml==6.0.1", "referencing==0.35.0", "regex==2024.4.16", "requests==2.31.0", "resize-right==0.0.2", "rpds-py==0.18.0", "safetensors==0.4.2", "scikit-image==0.21.0", "scipy==1.13.0", "semantic-version==2.10.0", "sentencepiece==0.2.0", "setuptools==69.2.0", "six==1.16.0", "smmap==5.0.1", "sniffio==1.3.1", "spandrel==0.1.6", "starlette==0.26.1", "sympy==1.12", "tifffile==2024.4.24", "timm==0.9.16", "tokenizers==0.13.3", "tomesd==0.1.3", "toolz==0.12.1", "torch==2.1.0", "torchdiffeq==0.2.3", "torchmetrics==1.3.2", "torchsde==0.2.6", "torchvision==0.16.0", "tqdm==4.66.2", "trampoline==0.1.2", 
"transformers==4.30.2", "typing-extensions==4.11.0", "tzdata==2024.1", "urllib3==2.2.1", "uvicorn==0.29.0", "wcwidth==0.2.13", "websockets==11.0.3", "yarl==1.9.4" ] }

Console logs

Last login: Fri Apr 26 12:46:05 on ttys002
[obfuscated]@binhyboy-M1-Pro ~ % cd stable-diffusion-webui/
[obfuscated]@binhyboy-M1-Pro stable-diffusion-webui % ./webui.sh

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on [obfuscated] user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.10.14 (main, Mar 20 2024, 03:57:45) [Clang 14.0.0 (clang-1400.0.29.202)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Installing torch and torchvision
Collecting torch==2.1.0
  Using cached torch-2.1.0-cp310-none-macosx_11_0_arm64.whl.metadata (24 kB)
Collecting torchvision==0.16.0
  Using cached torchvision-0.16.0-cp310-cp310-macosx_11_0_arm64.whl.metadata (6.6 kB)
Collecting filelock (from torch==2.1.0)
  Using cached filelock-3.13.4-py3-none-any.whl.metadata (2.8 kB)
Collecting typing-extensions (from torch==2.1.0)
  Using cached typing_extensions-4.11.0-py3-none-any.whl.metadata (3.0 kB)
Collecting sympy (from torch==2.1.0)
  Using cached sympy-1.12-py3-none-any.whl.metadata (12 kB)
Collecting networkx (from torch==2.1.0)
  Using cached networkx-3.3-py3-none-any.whl.metadata (5.1 kB)
Collecting jinja2 (from torch==2.1.0)
  Using cached Jinja2-3.1.3-py3-none-any.whl.metadata (3.3 kB)
Collecting fsspec (from torch==2.1.0)
  Using cached fsspec-2024.3.1-py3-none-any.whl.metadata (6.8 kB)
Collecting numpy (from torchvision==0.16.0)
  Using cached numpy-1.26.4-cp310-cp310-macosx_11_0_arm64.whl.metadata (61 kB)
Collecting requests (from torchvision==0.16.0)
  Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision==0.16.0)
  Using cached pillow-10.3.0-cp310-cp310-macosx_11_0_arm64.whl.metadata (9.2 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch==2.1.0)
  Using cached MarkupSafe-2.1.5-cp310-cp310-macosx_10_9_universal2.whl.metadata (3.0 kB)
Collecting charset-normalizer<4,>=2 (from requests->torchvision==0.16.0)
  Using cached charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl.metadata (33 kB)
Collecting idna<4,>=2.5 (from requests->torchvision==0.16.0)
  Using cached idna-3.7-py3-none-any.whl.metadata (9.9 kB)
Collecting urllib3<3,>=1.21.1 (from requests->torchvision==0.16.0)
  Using cached urllib3-2.2.1-py3-none-any.whl.metadata (6.4 kB)
Collecting certifi>=2017.4.17 (from requests->torchvision==0.16.0)
  Using cached certifi-2024.2.2-py3-none-any.whl.metadata (2.2 kB)
Collecting mpmath>=0.19 (from sympy->torch==2.1.0)
  Using cached mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)
Using cached torch-2.1.0-cp310-none-macosx_11_0_arm64.whl (59.5 MB)
Using cached torchvision-0.16.0-cp310-cp310-macosx_11_0_arm64.whl (1.6 MB)
Using cached pillow-10.3.0-cp310-cp310-macosx_11_0_arm64.whl (3.4 MB)
Using cached filelock-3.13.4-py3-none-any.whl (11 kB)
Using cached fsspec-2024.3.1-py3-none-any.whl (171 kB)
Using cached Jinja2-3.1.3-py3-none-any.whl (133 kB)
Using cached networkx-3.3-py3-none-any.whl (1.7 MB)
Using cached numpy-1.26.4-cp310-cp310-macosx_11_0_arm64.whl (14.0 MB)
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Using cached sympy-1.12-py3-none-any.whl (5.7 MB)
Using cached typing_extensions-4.11.0-py3-none-any.whl (34 kB)
Using cached certifi-2024.2.2-py3-none-any.whl (163 kB)
Using cached charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl (120 kB)
Using cached idna-3.7-py3-none-any.whl (66 kB)
Using cached MarkupSafe-2.1.5-cp310-cp310-macosx_10_9_universal2.whl (18 kB)
Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached urllib3-2.2.1-py3-none-any.whl (121 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision
Successfully installed MarkupSafe-2.1.5 certifi-2024.2.2 charset-normalizer-3.3.2 filelock-3.13.4 fsspec-2024.3.1 idna-3.7 jinja2-3.1.3 mpmath-1.3.0 networkx-3.3 numpy-1.26.4 pillow-10.3.0 requests-2.31.0 sympy-1.12 torch-2.1.0 torchvision-0.16.0 typing-extensions-4.11.0 urllib3-2.2.1
Installing clip
Installing open_clip
Cloning assets into /Users/[obfuscated]/stable-diffusion-webui/repositories/stable-diffusion-webui-assets...
Cloning into '/Users/[obfuscated]/stable-diffusion-webui/repositories/stable-diffusion-webui-assets'...
remote: Enumerating objects: 20, done.
remote: Counting objects: 100% (20/20), done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 20 (delta 0), reused 20 (delta 0), pack-reused 0
Receiving objects: 100% (20/20), 132.70 KiB | 1.35 MiB/s, done.
Cloning Stable Diffusion into /Users/[obfuscated]/stable-diffusion-webui/repositories/stable-diffusion-stability-ai...
Cloning into '/Users/[obfuscated]/stable-diffusion-webui/repositories/stable-diffusion-stability-ai'...
remote: Enumerating objects: 580, done.
remote: Counting objects: 100% (571/571), done.
remote: Compressing objects: 100% (306/306), done.
remote: Total 580 (delta 278), reused 446 (delta 247), pack-reused 9
Receiving objects: 100% (580/580), 73.44 MiB | 42.75 MiB/s, done.
Resolving deltas: 100% (278/278), done.
Cloning Stable Diffusion XL into /Users/[obfuscated]/stable-diffusion-webui/repositories/generative-models...
Cloning into '/Users/[obfuscated]/stable-diffusion-webui/repositories/generative-models'...
remote: Enumerating objects: 941, done.
remote: Total 941 (delta 0), reused 0 (delta 0), pack-reused 941
Receiving objects: 100% (941/941), 43.85 MiB | 35.95 MiB/s, done.
Resolving deltas: 100% (489/489), done.
Cloning K-diffusion into /Users/[obfuscated]/stable-diffusion-webui/repositories/k-diffusion...
Cloning into '/Users/[obfuscated]/stable-diffusion-webui/repositories/k-diffusion'...
remote: Enumerating objects: 1340, done.
remote: Counting objects: 100% (1340/1340), done.
remote: Compressing objects: 100% (433/433), done.
remote: Total 1340 (delta 940), reused 1259 (delta 900), pack-reused 0
Receiving objects: 100% (1340/1340), 238.52 KiB | 1.77 MiB/s, done.
Resolving deltas: 100% (940/940), done.
Cloning BLIP into /Users/[obfuscated]/stable-diffusion-webui/repositories/BLIP...
Cloning into '/Users/[obfuscated]/stable-diffusion-webui/repositories/BLIP'...
remote: Enumerating objects: 277, done.
remote: Counting objects: 100% (165/165), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 277 (delta 137), reused 136 (delta 135), pack-reused 112
Receiving objects: 100% (277/277), 7.03 MiB | 18.28 MiB/s, done.
Resolving deltas: 100% (152/152), done.
Installing requirements
Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
==============================================================================
You are running torch 2.1.0.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
Calculating sha256 for /Users/[obfuscated]/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned.ckpt:
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 68.0s (prepare environment: 59.0s, import torch: 4.1s, import gradio: 0.8s, setup paths: 1.3s, initialize shared: 0.4s, other imports: 1.0s, load scripts: 0.6s, create ui: 0.2s, gradio launch: 0.5s).
e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053
Loading weights [e1441589a6] from /Users/[obfuscated]/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned.ckpt
Creating model from config: /Users/[obfuscated]/stable-diffusion-webui/configs/v1-inference.yaml
changing setting sd_model_checkpoint to v1-5-pruned.ckpt: AttributeError
Traceback (most recent call last):
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/options.py", line 165, in set
    option.onchange()
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
    if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'

Applying attention optimization: InvokeAI... done.
loading stable diffusion model: AssertionError
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
    load_model()
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/sd_models.py", line 770, in load_model
    with devices.autocast(), torch.no_grad():
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/devices.py", line 218, in autocast
    if has_xpu() or has_mps() or cuda_no_autocast():
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/devices.py", line 28, in cuda_no_autocast
    device_id = get_cuda_device_id()
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/devices.py", line 40, in get_cuda_device_id
    ) or torch.cuda.current_device()
  File "/Users/[obfuscated]/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 769, in current_device
    _lazy_init()
  File "/Users/[obfuscated]/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Stable diffusion model failed to load
Exception in thread Thread-2 (load_model):
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/initialize.py", line 154, in load_model
    devices.first_time_calculation()
  File "/Users/[obfuscated]/stable-diffusion-webui/modules/devices.py", line 267, in first_time_calculation
    linear(x)
  File "/Users/[obfuscated]/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/[obfuscated]/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/[obfuscated]/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 503, in network_Linear_forward
    return originals.Linear_forward(self, input)
  File "/Users/[obfuscated]/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
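The final RuntimeError is a separate symptom of the same fallback: once model setup has dropped to the CPU, the first_time_calculation() warm-up pushes a half-precision tensor through a Linear layer, and CPU matmul does not implement float16 in the torch build shown in this log. A minimal reproduction, assuming nothing beyond a stock torch install like the 2.1.0 wheel used here:

    # Reproduces the last error in the log: fp16 matmul is not implemented on the
    # CPU in this torch build, so any half-precision Linear layer that ends up on
    # the CPU fails the same way first_time_calculation() does.
    import torch

    x = torch.randn(1, 4, dtype=torch.float16)          # fp16 input on the CPU
    linear = torch.nn.Linear(4, 4).to(torch.float16)    # fp16 weights on the CPU
    try:
        linear(x)
    except RuntimeError as err:
        print(err)  # "addmm_impl_cpu_" not implemented for 'Half'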

Additional information

MacBook Pro (16-inch, 2021) with Apple M1 Pro and 16 GB of RAM, running macOS Monterey 12.1.

viking1304 commented 6 months ago

You need to update your macOS; PyTorch's MPS backend requires at least macOS 12.3. More info here: https://developer.apple.com/metal/pytorch/
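For reference, torch itself can report whether the MPS backend is usable; a small check, assuming only a stock torch 2.x install (no webui code involved). On macOS versions below 12.3, is_available() returns False even though the arm64 wheel is built with MPS support, which is why this setup falls back to the CPU:

    # Check whether Apple's MPS backend can be used by this torch install.
    import torch

    print("MPS support compiled into torch:", torch.backends.mps.is_built())
    print("MPS usable on this machine:", torch.backends.mps.is_available())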

luohuatingyu commented 6 months ago

Caused by: no GPU environment, or no Torch build available for your GPU.

On Windows, edit webui-user.bat:

set COMMANDLINE_ARGS=--lowvram --skip-torch-cuda-test

On other platforms, edit webui-user.sh:

export COMMANDLINE_ARGS="--lowvram --skip-torch-cuda-test"