AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: when selecting some models #7668

Closed. Myuless closed this issue 1 year ago.

Myuless commented 1 year ago

Is there an existing issue for this?

What happened?

Everything was fine a week ago, but now when I try to use it, it produces this error. Can anyone suggest what the problem is? Stabble.txt

Steps to reproduce the problem

  1. I run the webui
  2. I open the IP address that appears, using the Yandex browser
  3. I try to load the wd-v1-3-full-opt model
  4. It generates an error

What should have happened?

The model should run

Commit where the problem happens

http://127.0.0.1:7860/

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

no

List of extensions

tag-autocomplete | https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git
LDSR | built-in
ScuNET | built-in
SwinIR | built-in
prompt-bracket-checker | built-in

Console logs

```Shell
venv "F:\Ai\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (main, Sep 19 2022, 15:45:37) [MSC v.1933 64 bit (AMD64)]
Commit hash: 685f9631b56ff8bd43bce24ff5ce0f9a0e9af490
Installing requirements for Web UI
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading weights [e3f7fa29] from F:\Ai\stable-diffusion-webui\models\Stable-diffusion\yiffymix_.safetensors
Applying cross attention optimization (Doggettx).
Model loaded.
Loaded a total of 0 textual inversion embeddings.
Embeddings:
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Loading weights [3e1a125f] from F:\Ai\stable-diffusion-webui\models\Stable-diffusion\wd-v1-3-full-opt.ckpt
Error verifying pickled file from F:\Ai\stable-diffusion-webui\models\Stable-diffusion\wd-v1-3-full-opt.ckpt:
Traceback (most recent call last):
  File "F:\Ai\stable-diffusion-webui\modules\safe.py", line 135, in load_with_extra
    check_pt(filename, extra_handler)
  File "F:\Ai\stable-diffusion-webui\modules\safe.py", line 93, in check_pt
    unpickler.load()
  File "F:\Ai\stable-diffusion-webui\venv\lib\site-packages\torch\_utils.py", line 138, in _rebuild_tensor_v2
    tensor = _rebuild_tensor(storage, storage_offset, size, stride)
  File "F:\Ai\stable-diffusion-webui\venv\lib\site-packages\torch\_utils.py", line 134, in _rebuild_tensor
    return t.set_(storage._untyped(), storage_offset, size, stride)
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 6553600 bytes.

The file may be malicious, so the program is not going to read it.
You can skip this check with --disable-safe-unpickle commandline argument.
Traceback (most recent call last):
  File "F:\Ai\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 284, in run_predict
    output = await app.blocks.process_api(
  File "F:\Ai\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 982, in process_api
    result = await self.call_function(fn_index, inputs, iterator)
  File "F:\Ai\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 824, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "F:\Ai\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "F:\Ai\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "F:\Ai\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "F:\Ai\stable-diffusion-webui\modules\ui.py", line 1642, in
    fn=lambda value, k=k: run_settings_single(value, key=k),
  File "F:\Ai\stable-diffusion-webui\modules\ui.py", line 1483, in run_settings_single
    if not opts.set(key, value):
  File "F:\Ai\stable-diffusion-webui\modules\shared.py", line 474, in set
    self.data_labels[key].onchange()
  File "F:\Ai\stable-diffusion-webui\modules\call_queue.py", line 15, in f
    res = func(*args, **kwargs)
  File "F:\Ai\stable-diffusion-webui\webui.py", line 63, in
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()))
  File "F:\Ai\stable-diffusion-webui\modules\sd_models.py", line 350, in reload_model_weights
    load_model_weights(sd_model, checkpoint_info)
  File "F:\Ai\stable-diffusion-webui\modules\sd_models.py", line 195, in load_model_weights
    sd = read_state_dict(checkpoint_file)
  File "F:\Ai\stable-diffusion-webui\modules\sd_models.py", line 177, in read_state_dict
    sd = get_state_dict_from_checkpoint(pl_sd)
  File "F:\Ai\stable-diffusion-webui\modules\sd_models.py", line 151, in get_state_dict_from_checkpoint
    pl_sd = pl_sd.pop("state_dict", pl_sd)
AttributeError: 'NoneType' object has no attribute 'pop'
```

Additional information

No response
Skeula commented 1 year ago

> Error verifying pickled file from F:\Ai\stable-diffusion-webui\models\Stable-diffusion\wd-v1-3-full-opt.ckpt: The file may be malicious, so the program is not going to read it. You can skip this check with --disable-safe-unpickle commandline argument.

So that's your problem. Either you need to find a safetensors version, or you need to convert it yourself (which, in theory, could expose you to malware), or you need to add the command-line argument mentioned there (which will let the webui load it, and again could, in theory, expose you to malware).

CorneVgit commented 1 year ago

The issue is the Out Of Memory error.

RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 6553600 bytes.

It can't allocate enough memory for the model. The model you're trying to load is the float32 "Full Weights + Optimizer Weights (For Training)" version of Waifu Diffusion 1.3, which is 14GB in size and, as the name says, meant for training. The --medvram option might help depending on how much VRAM your GPU has. My recommendation would be to use the float16 or float32 inference version instead if you're just trying to generate images.
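If you already have the 14GB file downloaded, you could also strip the training-only parts yourself instead of re-downloading. A rough sketch, assuming the Lightning-style layout where the model weights sit under a "state_dict" key and the optimizer state lives in other top-level keys (`prune_for_inference` is a hypothetical helper, not a webui function):

```python
import torch


def prune_for_inference(in_path: str, out_path: str) -> None:
    """Keep only model weights and cast floats to fp16 to roughly halve the size."""
    # load on CPU so a large checkpoint doesn't touch GPU memory
    ckpt = torch.load(in_path, map_location="cpu")
    # taking only "state_dict" discards optimizer state stored alongside it
    sd = ckpt.get("state_dict", ckpt)
    half = {
        k: v.half() if isinstance(v, torch.Tensor) and v.is_floating_point() else v
        for k, v in sd.items()
    }
    torch.save({"state_dict": half}, out_path)


# usage (paths are examples):
# prune_for_inference("wd-v1-3-full-opt.ckpt", "wd-v1-3-pruned-fp16.ckpt")
```

The result is a much smaller inference-only checkpoint, which sidesteps the allocation failure entirely.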