Mikubill / sd-webui-controlnet

WebUI extension for ControlNet

[Bug]: face_id_plus generates RuntimeError: don't know how to restore data location of torch.storage.UntypedStorage #2642

Open ride5k opened 9 months ago

ride5k commented 9 months ago

Is there an existing issue for this?

What happened?

face_id works properly, but the plus version does not. An error is printed on the console and the result does not show the expected inference.

Steps to reproduce the problem

  1. Go to txt2img, enter prompt and enable ip-adapter-faceid-plus_sd15_lora
  2. Enable controlnet, select ip-adapter_face_id_plus as preprocessor, ip-adapter-faceid-plus_sd15 as model
  3. Upload sample image
  4. Generate
  5. Console shows the error; generation completes but without ControlNet influence (an equivalent API call is sketched below)
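
For completeness, the same repro can be driven through the webui API (the `--api` flag is already in the command line). This is a hedged sketch only: the ControlNet `alwayson_scripts` field names follow the extension's API docs and may differ between versions, and the image path and prompt are illustrative.

```python
import base64
import requests

# Illustrative sample image; any clear face photo should reproduce the issue.
with open("sample_face.png", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "portrait photo <lora:ip-adapter-faceid-plus_sd15_lora:1>",
    "steps": 20,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "enabled": True,
                "module": "ip-adapter_face_id_plus",                # preprocessor from step 2
                "model": "ip-adapter-faceid-plus_sd15 [d86a490f]",   # model from step 2 / log
                "image": img_b64,
            }]
        }
    },
}

# --listen is set, so the UI is reachable on port 7860 (see "Running on local URL" in the log).
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
print(r.json().keys())  # the RuntimeError from the report appears on the server console, not here
```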

What should have happened?

No error on the console; the generated image should show ControlNet influence.

Commit where the problem happens

webui: 1.7.0 controlnet: ControlNet v1.1.440

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

--listen --no-half --precision full --no-half-vae --theme=dark --disable-nan-check --disable-safe-unpickle --medvram --sub-quad-q-chunk-size 1024 --sub-quad-kv-chunk-size 256 --sub-quad-chunk-threshold 50 --skip-torch-cuda-test --use-directml --api --cors-allow-origins=http://127.0.0.1:3456 --enable-insecure-extension-access

List of enabled extensions

(screenshot of enabled extensions: Screenshot 2024-02-14 110821)

Console logs

venv "T:\auto1111\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.7.0
Commit hash: 835ee2013fc46230271a02a002b4ba08c689f62d
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
T:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Launching Web UI with arguments: --listen --no-half --precision full --no-half-vae --theme=dark --disable-nan-check --disable-safe-unpickle --medvram --sub-quad-q-chunk-size 1024 --sub-quad-kv-chunk-size 256 --sub-quad-chunk-threshold 50 --skip-torch-cuda-test --use-directml --api --cors-allow-origins=http://127.0.0.1:3456 --enable-insecure-extension-access
ONNX: selected=CPUExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Civitai Helper: Get Custom Model Folder
ControlNet preprocessor location: T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads
2024-02-14 11:04:15,277 - ControlNet - INFO - ControlNet v1.1.440
2024-02-14 11:04:15,390 - ControlNet - INFO - ControlNet v1.1.440
Loading weights [e5f3cbc5f7] from T:\auto1111\stable-diffusion-webui-directml\models\Stable-diffusion\txt2img\realisticVisionV60B1_v60B1VAE.safetensors
2024-02-14 11:04:16,275 - ControlNet - INFO - ControlNet UI callback registered.
Civitai Helper: Settings:
Civitai Helper: max_size_preview: True
Civitai Helper: skip_nsfw_preview: False
Civitai Helper: open_url_with_js: True
Civitai Helper: proxy:
Civitai Helper: use civitai api key: False
Creating model from config: T:\auto1111\stable-diffusion-webui-directml\configs\v1-inference.yaml
Running on local URL:  http://0.0.0.0:7860
Loading VAE weights specified in settings: T:\auto1111\stable-diffusion-webui-directml\models\VAE\vae-ft-ema-560000-ema-pruned.ckpt
Applying attention optimization: sub-quadratic... done.
Model loaded in 3.1s (load weights from disk: 1.0s, create model: 0.3s, apply weights to model: 1.1s, load VAE: 0.2s, calculate empty prompt: 0.4s).

To create a public link, set `share=True` in `launch()`.
Startup time: 18.1s (prepare environment: 11.7s, initialize shared: 1.6s, list SD models: 0.7s, load scripts: 2.4s, create ui: 1.9s, gradio launch: 4.3s, app_started_callback: 0.3s).
2024-02-14 11:05:44,134 - ControlNet - INFO - unit_separate = False, style_align = False
2024-02-14 11:05:44,418 - ControlNet - INFO - Loading model: ip-adapter-faceid-plus_sd15 [d86a490f]
2024-02-14 11:05:44,473 - ControlNet - INFO - Loaded state_dict from [T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\models\ip-adapter-faceid-plus_sd15.bin]
2024-02-14 11:05:44,849 - ControlNet - INFO - ControlNet model ip-adapter-faceid-plus_sd15 [d86a490f](ControlModelType.IPAdapter) loaded.
2024-02-14 11:05:44,870 - ControlNet - INFO - Using preprocessor: ip-adapter_face_id_plus
2024-02-14 11:05:44,871 - ControlNet - INFO - preprocessor resolution = 512
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
T:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\insightface\utils\transform.py:68: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
  P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4
*** Error running process: T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "T:\auto1111\stable-diffusion-webui-directml\modules\scripts.py", line 718, in process
        script.process(p, *script_args)
      File "T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1143, in process
        self.controlnet_hack(p)
      File "T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1128, in controlnet_hack
        self.controlnet_main_entry(p)
      File "T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 969, in controlnet_main_entry
        controls, hr_controls = list(zip(*[preprocess_input_image(img) for img in input_images]))
      File "T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 969, in <listcomp>
        controls, hr_controls = list(zip(*[preprocess_input_image(img) for img in input_images]))
      File "T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 936, in preprocess_input_image
        detected_map, is_image = self.preprocessor[unit.module](
      File "T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\utils.py", line 80, in decorated_func
        return cached_func(*args, **kwargs)
      File "T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\utils.py", line 64, in cached_func
        return func(*args, **kwargs)
      File "T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\global_state.py", line 37, in unified_preprocessor
        return preprocessor_modules[preprocessor_name](*args, **kwargs)
      File "T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\processor.py", line 831, in face_id_plus
        clip_embed, _ = clip(img, config='clip_h', low_vram=low_vram)
      File "T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\processor.py", line 392, in clip
        clip_encoder[config] = ClipVisionDetector(config, low_vram)
      File "T:\auto1111\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\clipvision\__init__.py", line 115, in __init__
        sd = torch.load(file_path, map_location=self.device)
      File "T:\auto1111\stable-diffusion-webui-directml\modules\safe.py", line 108, in load
        return load_with_extra(filename, *args, extra_handler=global_extra_handler, **kwargs)
      File "T:\auto1111\stable-diffusion-webui-directml\modules\safe.py", line 156, in load_with_extra
        return unsafe_torch_load(filename, *args, **kwargs)
      File "T:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 809, in load
        return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
      File "T:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1172, in _load
        result = unpickler.load()
      File "C:\Users\gilberts\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1213, in load
        dispatch[key[0]](self)
      File "C:\Users\gilberts\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1254, in load_binpersid
        self.append(self.persistent_load(pid))
      File "T:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1142, in persistent_load
        typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
      File "T:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1116, in load_tensor
        wrap_storage=restore_location(storage, location),
      File "T:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1086, in restore_location
        return default_restore_location(storage, str(map_location))
      File "T:\auto1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 220, in default_restore_location
        raise RuntimeError("don't know how to restore data location of "
    RuntimeError: don't know how to restore data location of torch.storage.UntypedStorage (tagged with privateuseone:0)

---

Additional information

No response
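
For context on the traceback: `ClipVisionDetector.__init__` calls `torch.load(file_path, map_location=self.device)`, and on the DirectML fork that device maps to `privateuseone:0`, a location tag that torch's `default_restore_location` has no handler for, hence the RuntimeError. A minimal workaround sketch, assuming the `torch_directml` package and an illustrative checkpoint path, is to deserialize on the CPU and move the tensors afterwards:

```python
import torch
import torch_directml  # DirectML backend; exposes the "privateuseone" device

device = torch_directml.device()  # resolves to "privateuseone:0"

# torch.load(..., map_location=device) raises
# "don't know how to restore data location of torch.storage.UntypedStorage"
# because torch.serialization has no restore hook registered for privateuseone.
# Workaround sketch: deserialize on the CPU, then move tensors to the device.
sd = torch.load("clip_vision_h.pth", map_location="cpu")  # illustrative path
sd = {k: (v.to(device) if isinstance(v, torch.Tensor) else v) for k, v in sd.items()}
```

The extension could presumably do the equivalent by passing `map_location="cpu"` to that `torch.load` call and moving the constructed model to the device afterwards, though that is untested here.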

huchenlei commented 9 months ago

What type of hardware are you using? Is it an intel or AMD graphics card?

ride5k commented 9 months ago

> What type of hardware are you using? Is it an intel or AMD graphics card?

@huchenlei I am running the DirectML fork on AMD.

huchenlei commented 9 months ago

I am not sure how we should approach this. As the log indicates, insightface is running on the CPU: `Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}`.

Maybe you want to file an issue in the insightface repo?
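
For reference, a minimal sketch of how the buffalo_l models end up on the CPU provider, using the `insightface` package seen in the log (requesting only `CPUExecutionProvider` reproduces the `Applied providers` lines; a DirectML-capable onnxruntime could request `DmlExecutionProvider` first):

```python
from insightface.app import FaceAnalysis

# Requesting only the CPU provider matches the
# "Applied providers: ['CPUExecutionProvider']" lines in the console log above.
# On an onnxruntime-directml install, ["DmlExecutionProvider", "CPUExecutionProvider"]
# could be tried instead; onnxruntime falls back to CPU if a provider is unavailable.
app = FaceAnalysis(name="buffalo_l", providers=["CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))  # matches "set det-size: (640, 640)" in the log
```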