AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: DAT x2 Error #16015

Open den3asphalt opened 3 weeks ago

den3asphalt commented 3 weeks ago

Checklist

What happened?

Simply put, an error occurs when trying to use DAT_x2 with Hires. fix.

Steps to reproduce the problem

  1. Set models.
  2. Set Prompt.
  3. Select the DAT series in Hires. Fix.
  4. Generate.

What should have happened?

Hires fix works and is upscaled.

What browsers do you use to access the UI?

Google Chrome

Sysinfo

sysinfo-2024-06-13-21-39.json

Console logs

Creating venv in directory C:\Users\username\Desktop\SD\test\stable-diffusion-webui\venv using python "C:\Users\username\AppData\Local\Programs\Python\Python310\python.exe"
venv "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.4
Commit hash: feee37d75f1b168768014e4634dcb156ee649c05
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting torch==2.1.2
  Using cached https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp310-cp310-win_amd64.whl (2473.9 MB)
Collecting torchvision==0.16.2
  Using cached https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp310-cp310-win_amd64.whl (5.6 MB)
Collecting filelock
  Downloading filelock-3.15.1-py3-none-any.whl (15 kB)
Collecting sympy
  Downloading sympy-1.12.1-py3-none-any.whl (5.7 MB)
     ---------------------------------------- 5.7/5.7 MB 40.7 MB/s eta 0:00:00
Collecting typing-extensions
  Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Collecting fsspec
  Downloading fsspec-2024.6.0-py3-none-any.whl (176 kB)
     ---------------------------------------- 176.9/176.9 kB 10.4 MB/s eta 0:00:00
Collecting jinja2
  Downloading jinja2-3.1.4-py3-none-any.whl (133 kB)
     ---------------------------------------- 133.3/133.3 kB 7.7 MB/s eta 0:00:00
Collecting networkx
  Downloading networkx-3.3-py3-none-any.whl (1.7 MB)
     ---------------------------------------- 1.7/1.7 MB 112.8 MB/s eta 0:00:00
Collecting requests
  Downloading requests-2.32.3-py3-none-any.whl (64 kB)
     ---------------------------------------- 64.9/64.9 kB 3.6 MB/s eta 0:00:00
Collecting numpy
  Downloading numpy-1.26.4-cp310-cp310-win_amd64.whl (15.8 MB)
     ---------------------------------------- 15.8/15.8 MB 81.8 MB/s eta 0:00:00
Collecting pillow!=8.3.*,>=5.3.0
  Downloading pillow-10.3.0-cp310-cp310-win_amd64.whl (2.5 MB)
     ---------------------------------------- 2.5/2.5 MB 168.1 MB/s eta 0:00:00
Collecting MarkupSafe>=2.0
  Downloading https://download.pytorch.org/whl/MarkupSafe-2.1.5-cp310-cp310-win_amd64.whl (17 kB)
Collecting certifi>=2017.4.17
  Downloading certifi-2024.6.2-py3-none-any.whl (164 kB)
     ---------------------------------------- 164.4/164.4 kB ? eta 0:00:00
Collecting charset-normalizer<4,>=2
  Downloading charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl (100 kB)
     ---------------------------------------- 100.3/100.3 kB 6.0 MB/s eta 0:00:00
Collecting urllib3<3,>=1.21.1
  Downloading urllib3-2.2.1-py3-none-any.whl (121 kB)
     ---------------------------------------- 121.1/121.1 kB ? eta 0:00:00
Collecting idna<4,>=2.5
  Downloading idna-3.7-py3-none-any.whl (66 kB)
     ---------------------------------------- 66.8/66.8 kB ? eta 0:00:00
Collecting mpmath<1.4.0,>=1.1.0
  Downloading https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
     ---------------------------------------- 536.2/536.2 kB 32.9 MB/s eta 0:00:00
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision
Successfully installed MarkupSafe-2.1.5 certifi-2024.6.2 charset-normalizer-3.3.2 filelock-3.15.1 fsspec-2024.6.0 idna-3.7 jinja2-3.1.4 mpmath-1.3.0 networkx-3.3 numpy-1.26.4 pillow-10.3.0 requests-2.32.3 sympy-1.12.1 torch-2.1.2+cu121 torchvision-0.16.2+cu121 typing-extensions-4.12.2 urllib3-2.2.1

[notice] A new release of pip is available: 23.0.1 -> 24.0
[notice] To update, run: C:\Users\username\Desktop\SD\test\stable-diffusion-webui\venv\Scripts\python.exe -m pip install --upgrade pip
Installing clip
Installing open_clip
Cloning assets into C:\Users\username\Desktop\SD\test\stable-diffusion-webui\repositories\stable-diffusion-webui-assets...
Cloning into 'C:\Users\username\Desktop\SD\test\stable-diffusion-webui\repositories\stable-diffusion-webui-assets'...
remote: Enumerating objects: 20, done.
remote: Counting objects: 100% (20/20), done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 20 (delta 0), reused 20 (delta 0), pack-reused 0
Receiving objects: 100% (20/20), 132.70 KiB | 66.35 MiB/s, done.
Cloning Stable Diffusion into C:\Users\username\Desktop\SD\test\stable-diffusion-webui\repositories\stable-diffusion-stability-ai...
Cloning into 'C:\Users\username\Desktop\SD\test\stable-diffusion-webui\repositories\stable-diffusion-stability-ai'...
remote: Enumerating objects: 580, done.
remote: Counting objects: 100% (571/571), done.
remote: Compressing objects: 100% (304/304), done.
remote: Total 580 (delta 278), reused 448 (delta 249), pack-reused 9
Receiving objects: 100% (580/580), 73.44 MiB | 16.33 MiB/s, done.
Resolving deltas: 100% (278/278), done.
Cloning Stable Diffusion XL into C:\Users\username\Desktop\SD\test\stable-diffusion-webui\repositories\generative-models...
Cloning into 'C:\Users\username\Desktop\SD\test\stable-diffusion-webui\repositories\generative-models'...
remote: Enumerating objects: 941, done.
remote: Total 941 (delta 0), reused 0 (delta 0), pack-reused 941
Receiving objects: 100% (941/941), 43.85 MiB | 15.43 MiB/s, done.
Resolving deltas: 100% (490/490), done.
Cloning K-diffusion into C:\Users\username\Desktop\SD\test\stable-diffusion-webui\repositories\k-diffusion...
Cloning into 'C:\Users\username\Desktop\SD\test\stable-diffusion-webui\repositories\k-diffusion'...
remote: Enumerating objects: 1345, done.
remote: Counting objects: 100% (1345/1345), done.
remote: Compressing objects: 100% (434/434), done.
remote: Total 1345 (delta ...), reused 1264 (delta 904), pack-reused 0
Receiving objects: 100% (1345/1345), 239.04 KiB | 17.07 MiB/s, done.
Resolving deltas: 100% (944/944), done.
Cloning BLIP into C:\Users\username\Desktop\SD\test\stable-diffusion-webui\repositories\BLIP...
Cloning into 'C:\Users\username\Desktop\SD\test\stable-diffusion-webui\repositories\BLIP'...
remote: Enumerating objects: 277, done.
remote: Counting objects: 100% (165/165), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 277 (delta 137), reused 136 (delta 135), pack-reused 112
Receiving objects: 100% (277/277), 7.03 MiB | 29.28 MiB/s, done.
Resolving deltas: 100% (152/152), done.
Installing requirements
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Downloading: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" to C:\Users\username\Desktop\SD\test\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors

100%|█████████████████████████████████████████████████████████████████████████████| 3.97G/3.97G [00:45<00:00, 93.7MB/s]
Calculating sha256 for C:\Users\username\Desktop\SD\test\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors:
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 212.1s (prepare environment: 157.2s, import torch: 3.5s, import gradio: 1.1s, setup paths: 1.4s, initialize shared: 0.3s, other imports: 1.2s, list SD models: 46.2s, load scripts: 0.6s, create ui: 0.2s, gradio launch: 0.2s).
6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa
Loading weights [6ce0161689] from C:\Users\username\Desktop\SD\test\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\Users\username\Desktop\SD\test\stable-diffusion-webui\configs\v1-inference.yaml
C:\Users\username\Desktop\SD\test\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Applying attention optimization: Doggettx... done.
Model loaded in 5.3s (calculate hash: 3.1s, create model: 0.2s, apply weights to model: 1.8s).
Reusing loaded model v1-5-pruned-emaonly.safetensors [6ce0161689] to load baxlBlueArchiveFlatCelluloidStyleFineTune_xlv3.safetensors
Calculating sha256 for C:\Users\username\Desktop\SD\test\stable-diffusion-webui\models\Stable-diffusion\baxlBlueArchiveFlatCelluloidStyleFineTune_xlv3.safetensors: 95affd8c8f664e6f30c1c5dd54723eacf1dbf3f02da15cb82a67a21eb003875f
Loading weights [95affd8c8f] from C:\Users\username\Desktop\SD\test\stable-diffusion-webui\models\Stable-diffusion\baxlBlueArchiveFlatCelluloidStyleFineTune_xlv3.safetensors
Creating model from config: C:\Users\username\Desktop\SD\test\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Applying attention optimization: Doggettx... done.
Model loaded in 2.6s (create model: 0.2s, apply weights to model: 2.2s).
Downloading VAEApprox model to: C:\Users\username\Desktop\SD\test\stable-diffusion-webui\models\VAE-approx\vaeapprox-sdxl.pt
100%|███████████████████████████████████████████████████████████████████████████████| 209k/209k [00:00<00:00, 26.8MB/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00,  6.46it/s]
Downloading: "https://github.com/n0kovo/dat_upscaler_models/raw/main/DAT/DAT_x2.pth" to C:\Users\username\Desktop\SD\test\stable-diffusion-webui\models\DAT\DAT_x2.pth

100%|██████████████████████████████████████████████████████████████████████████████████| 134/134 [00:00<00:00, 134kB/s]
*** Error verifying pickled file from C:\Users\username\Desktop\SD\test\stable-diffusion-webui\models\DAT\DAT_x2.pth
*** The file may be malicious, so the program is not going to read it.
*** You can skip this check with --disable-safe-unpickle commandline argument.
***
    Traceback (most recent call last):
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\safe.py", line 83, in check_pt
        with zipfile.ZipFile(filename) as z:
      File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 1269, in __init__
        self._RealGetContents()
      File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 1336, in _RealGetContents
        raise BadZipFile("File is not a zip file")
    zipfile.BadZipFile: File is not a zip file

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\safe.py", line 137, in load_with_extra
        check_pt(filename, extra_handler)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\safe.py", line 104, in check_pt
        unpickler.load()
      File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1213, in load
        dispatch[key[0]](self)
    KeyError: 118

---
*** Error completing request
*** Arguments: ('task(yr0ipsqvkga2y51)', <gradio.routes.Request object at 0x000002A1A2F7E500>, '1girls, arona_\\(Blue_Archive\\), Blue_Archive, ', 'lowres, error, worst quality, low quality, jpeg artifacts, watermark, signature, username', [], 1, 1, 7, 920, 768, True, 0.45, 2, 'DAT x2', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'Euler a', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\processing.py", line 845, in process_images
        res = process_images_inner(p)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\processing.py", line 981, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\processing.py", line 1344, in sample
        return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\processing.py", line 1393, in sample_hr_pass
        image = images.resize_image(0, image, target_width, target_height, upscaler_name=self.hr_upscaler)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\images.py", line 288, in resize_image
        res = resize(im, width, height)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\images.py", line 280, in resize
        im = upscaler.scaler.upscale(im, scale, upscaler.data_path)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\upscaler.py", line 68, in upscale
        img = self.do_upscale(img, selected_model)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\dat_model.py", line 32, in do_upscale
        model_descriptor = modelloader.load_spandrel_model(
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\modelloader.py", line 150, in load_spandrel_model
        model_descriptor = spandrel.ModelLoader(device=device).load_from_file(str(path))
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\venv\lib\site-packages\spandrel\__helpers\loader.py", line 41, in load_from_file
        state_dict = self.load_state_dict_from_file(path)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\venv\lib\site-packages\spandrel\__helpers\loader.py", line 70, in load_state_dict_from_file
        return canonicalize_state_dict(state_dict)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\venv\lib\site-packages\spandrel\__helpers\canonicalize.py", line 27, in canonicalize_state_dict
        if unwrap_key in state_dict and isinstance(state_dict[unwrap_key], dict):
    TypeError: argument of type 'NoneType' is not iterable

---

Additional information

I recently updated my environment: webui, drivers, Python, etc.

CommieDog commented 3 weeks ago

Looking at the console log, this caught my attention:

*** Error verifying pickled file from C:\Users\username\Desktop\SD\test\stable-diffusion-webui\models\DAT\DAT_x2.pth
*** The file may be malicious, so the program is not going to read it.
*** You can skip this check with --disable-safe-unpickle commandline argument.

Looks like a possible cause, plus a workaround. Have you tried using --disable-safe-unpickle?
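As a side note, the two stages visible in the traceback (first a `BadZipFile`, then a `KeyError: 118` from the pickle dispatch table) suggest the downloaded file is not a torch checkpoint at all. The sketch below paraphrases that two-step check (it is not the actual `modules/safe.py` code): a file saved with `torch.save` is normally a zip archive, and a legacy checkpoint starts with a pickle stream, so a plain-text download such as an HTML error page or a git-lfs pointer fails both. Notably, 118 is `ord('v')`, which would match a text file beginning with `version ...` — consistent with a git-lfs pointer, though that is only a guess from the log.

```python
import io
import pickle
import zipfile


def looks_like_torch_checkpoint(data: bytes) -> bool:
    """Rough check: could these bytes plausibly be a torch checkpoint?"""
    # Modern torch.save output is a zip archive.
    if zipfile.is_zipfile(io.BytesIO(data)):
        return True
    # Legacy torch files begin with a pickle stream; arbitrary text does not,
    # so unpickling fails on the first byte (e.g. KeyError: 118 for 'v').
    try:
        pickle.Unpickler(io.BytesIO(data)).load()
        return True
    except Exception:
        return False


print(looks_like_torch_checkpoint(b"<html>404 Not Found</html>"))  # False
print(looks_like_torch_checkpoint(b"version https://git-lfs.github.com/spec/v1\n"))  # False
```

So `--disable-safe-unpickle` cannot help here: it only swaps which loader chokes on the bogus file, as the second traceback below shows.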

sinand99 commented 3 weeks ago

This happens because the URL that A1111 uses for the download is no longer valid: "https://raw.githubusercontent.com/n0kovo/dat_upscaler_models/main/DAT/DAT_x2.pth"

You can download the models from the original page here. Just put them in the "models\DAT" folder and they will work.
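After downloading manually, it may be worth sanity-checking the file before pointing the webui at it. This is a hypothetical helper (the path and the 1 MB threshold are illustrative): real DAT upscaler weights are tens of megabytes and zip-based, whereas the broken auto-download in the log above was only 134 bytes. A passing check does not prove the file is safe or correct, it only rules out the "error page / LFS pointer" failure mode seen here.

```python
import os
import zipfile


def check_model_file(path: str, min_bytes: int = 1_000_000) -> str:
    """Heuristic check that a downloaded .pth is plausibly real model weights."""
    size = os.path.getsize(path)
    if size < min_bytes:
        # The broken auto-download above was only 134 bytes.
        return f"suspicious: {size} bytes is far too small for model weights"
    if not zipfile.is_zipfile(path):
        return "suspicious: not a zip archive (torch.save output is zip-based)"
    return "looks plausible"
```

For example, `check_model_file(r"models\DAT\DAT_x2.pth")` on the file from the failed download would report it as suspicious.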

den3asphalt commented 2 weeks ago

Looking at the console log, this caught my attention:

*** Error verifying pickled file from C:\Users\username\Desktop\SD\test\stable-diffusion-webui\models\DAT\DAT_x2.pth
*** The file may be malicious, so the program is not going to read it.
*** You can skip this check with --disable-safe-unpickle commandline argument.

Looks like a possible cause, plus a workaround. Have you tried using --disable-safe-unpickle?

I have already tried this; it did not solve the problem and only produced a different error:

*** Error completing request
*** Arguments: ('task(6vxgwoqgi4t2zi2)', <gradio.routes.Request object at 0x00000184281DDC90>, '1girl, mika_\\(Blue_Archive\\), Blue_Archive,', '', [], 1, 1, 7, 512, 512, True, 0.7, 2, 'DAT x2', 10, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 18, 'Euler a', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\processing.py", line 845, in process_images
        res = process_images_inner(p)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\processing.py", line 981, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\processing.py", line 1344, in sample
        return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\processing.py", line 1393, in sample_hr_pass
        image = images.resize_image(0, image, target_width, target_height, upscaler_name=self.hr_upscaler)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\images.py", line 288, in resize_image
        res = resize(im, width, height)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\images.py", line 280, in resize
        im = upscaler.scaler.upscale(im, scale, upscaler.data_path)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\upscaler.py", line 68, in upscale
        img = self.do_upscale(img, selected_model)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\dat_model.py", line 32, in do_upscale
        model_descriptor = modelloader.load_spandrel_model(
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\modelloader.py", line 150, in load_spandrel_model
        model_descriptor = spandrel.ModelLoader(device=device).load_from_file(str(path))
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\venv\lib\site-packages\spandrel\__helpers\loader.py", line 41, in load_from_file
        state_dict = self.load_state_dict_from_file(path)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\venv\lib\site-packages\spandrel\__helpers\loader.py", line 60, in load_state_dict_from_file
        state_dict = self._load_pth(path)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\venv\lib\site-packages\spandrel\__helpers\loader.py", line 82, in _load_pth
        return torch.load(
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\safe.py", line 108, in load
        return load_with_extra(filename, *args, extra_handler=global_extra_handler, **kwargs)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\modules\safe.py", line 156, in load_with_extra
        return unsafe_torch_load(filename, *args, **kwargs)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 1028, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 1246, in _legacy_load
        magic_number = pickle_module.load(f, **pickle_load_args)
      File "C:\Users\username\Desktop\SD\test\stable-diffusion-webui\venv\lib\site-packages\spandrel\__helpers\unpickler.py", line 29, in <lambda>
        load=lambda *args, **kwargs: RestrictedUnpickler(*args, **kwargs).load(),
      File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1213, in load
        dispatch[key[0]](self)
    KeyError: 118

---
den3asphalt commented 2 weeks ago

This happens because the URL that A1111 uses for the download is no longer valid: "https://raw.githubusercontent.com/n0kovo/dat_upscaler_models/main/DAT/DAT_x2.pth"

You can download the models from the original page here. Just put them in the "models\DAT" folder and they will work.

Thanks for letting me know. I downloaded the model from the "pretrained models" section at that URL and it worked. It looks like the problem was introduced in #14690.

Hopefully this will be fixed in the next webui release. I will keep this issue open until the code is corrected.