lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Feature Request]: After updating Controlnet 1.1.431, IP Adapter does not work again #362

Closed · SunGreen777 closed this 4 months ago

SunGreen777 commented 5 months ago

Is there an existing issue for this?

What would your feature do?

After updating ControlNet to 1.1.431, the IP Adapter does not work again. A clean installation with only ControlNet installed was enough to reproduce it.

Line 82, clip_vision_h_uc = torch.load(clip_vision_h_uc, map_location=torch.device('cpu'))['uc'], is OK. Does it work for you?

Additional information

2024-01-19 23:43:55,564 - ControlNet - INFO - unit_separate = False, style_align = False
 6/20 [00:08<00:17,  1.27s/it]
2024-01-19 23:43:55,706 - ControlNet - INFO - Loading model: ip-adapter-plus_sd15 [c817b455]
2024-01-19 23:43:55,749 - ControlNet - INFO - Loaded state_dict from [W:\stable-diffusion-webui-directml\models\ControlNet\ip-adapter-plus_sd15.pth]
2024-01-19 23:43:56,002 - ControlNet - INFO - ControlNet model ip-adapter-plus_sd15 [c817b455] loaded.
2024-01-19 23:43:56,012 - ControlNet - INFO - Loading preprocessor: ip-adapter_clip_sd15
2024-01-19 23:43:56,012 - ControlNet - INFO - preprocessor resolution = 512
*** Error running process: W:\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "W:\stable-diffusion-webui-directml\modules\scripts.py", line 718, in process
        script.process(p, *script_args)
      File "W:\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1073, in process
        self.controlnet_hack(p)
      File "W:\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1058, in controlnet_hack
        self.controlnet_main_entry(p)
      File "W:\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 885, in controlnet_main_entry
        detected_map, is_image = preprocessor(
      File "W:\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\utils.py", line 75, in decorated_func
        return cached_func(*args, **kwargs)
      File "W:\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\utils.py", line 63, in cached_func
        return func(*args, **kwargs)
      File "W:\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\global_state.py", line 37, in unified_preprocessor
        return preprocessor_modules[preprocessor_name](*args, **kwargs)
      File "W:\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\processor.py", line 357, in clip
        from annotator.clipvision import ClipVisionDetector
      File "W:\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\clipvision\__init__.py", line 85, in <module>
        clip_vision_vith_uc = torch.load(clip_vision_vith_uc, map_location=devices.get_device_for("controlnet"))['uc']
      File "W:\stable-diffusion-webui-directml\modules\safe.py", line 108, in load
        return load_with_extra(filename, *args, extra_handler=global_extra_handler, **kwargs)
      File "W:\stable-diffusion-webui-directml\modules\safe.py", line 156, in load_with_extra
        return unsafe_torch_load(filename, *args, **kwargs)
      File "W:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 809, in load
        return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
      File "W:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1172, in _load
        result = unpickler.load()
      File "C:\Users\Havemoney\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1213, in load
        dispatch[key[0]](self)
      File "C:\Users\Havemoney\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1254, in load_binpersid
        self.append(self.persistent_load(pid))
      File "W:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1142, in persistent_load
        typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
      File "W:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1116, in load_tensor
        wrap_storage=restore_location(storage, location),
      File "W:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1086, in restore_location
        return default_restore_location(storage, str(map_location))
      File "W:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 220, in default_restore_location
        raise RuntimeError("don't know how to restore data location of "
    RuntimeError: don't know how to restore data location of torch.storage.UntypedStorage (tagged with privateuseone:0)

SunGreen777 commented 5 months ago

I don't know what's going on. I reinstalled A1111 and only the IP Adapter doesn't work. Am I really alone?

azamet90 commented 5 months ago

I don't know what's going on. I reinstalled A1111 and only the IP Adapter doesn't work. Am I really alone?

same here

CS1o commented 5 months ago

I have the same problem with IP-Adapters not working anymore. I updated ControlNet a few minutes ago to the new version (ControlNet v1.1.434) and it's still broken, with the following error:

2024-01-21 22:34:25,359 - ControlNet - INFO - unit_separate = False, style_align = False
2024-01-21 22:34:25,522 - ControlNet - INFO - Loading model: ip-adapter-full-face_sd15 [852b9843]
2024-01-21 22:34:25,528 - ControlNet - INFO - Loaded state_dict from [D:\Programme\AI-Zeug\stable-diffusion-webui-directml\models\ControlNet\ip-adapter-full-face_sd15.safetensors]
2024-01-21 22:34:25,637 - ControlNet - INFO - ControlNet model ip-adapter-full-face_sd15 [852b9843] loaded.
2024-01-21 22:34:25,640 - ControlNet - INFO - Loading preprocessor: ip-adapter_clip_sd15
2024-01-21 22:34:25,640 - ControlNet - INFO - preprocessor resolution = 512
*** Error running process: D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\scripts.py", line 718, in process
        script.process(p, *script_args)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1073, in process
        self.controlnet_hack(p)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1058, in controlnet_hack
        self.controlnet_main_entry(p)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 885, in controlnet_main_entry
        detected_map, is_image = preprocessor(
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\utils.py", line 76, in decorated_func
        return cached_func(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\utils.py", line 64, in cached_func
        return func(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\global_state.py", line 37, in unified_preprocessor
        return preprocessor_modules[preprocessor_name](*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\processor.py", line 360, in clip
        clip_encoder[config] = ClipVisionDetector(config, low_vram)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\clipvision\__init__.py", line 116, in __init__
        sd = torch.load(file_path, map_location=self.device)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\safe.py", line 108, in load
        return load_with_extra(filename, *args, extra_handler=global_extra_handler, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\safe.py", line 156, in load_with_extra
        return unsafe_torch_load(filename, *args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 809, in load
        return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1172, in _load
        result = unpickler.load()
      File "C:\Users\webyo\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1213, in load
        dispatch[key[0]](self)
      File "C:\Users\webyo\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1254, in load_binpersid
        self.append(self.persistent_load(pid))
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1142, in persistent_load
        typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1116, in load_tensor
        wrap_storage=restore_location(storage, location),
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1086, in restore_location
        return default_restore_location(storage, str(map_location))
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 220, in default_restore_location
        raise RuntimeError("don't know how to restore data location of "
    RuntimeError: don't know how to restore data location of torch.storage.UntypedStorage (tagged with privateuseone:0)

When starting webui-user.bat or when selecting the new IP-Adapter FaceID preprocessors, the following error appears (to mention: I'm not using --onnx):

2024-01-21 22:52:57.7141264 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
2024-01-21 22:52:58.0830521 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
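
The error 126 lines mean onnxruntime probed for its CUDA provider DLL, which this install does not ship; insightface then falls back to the CPUExecutionProvider, as the "Applied providers" line shows. If the onnxruntime-directml package is installed, the DirectML provider can be requested instead; a minimal sketch with a hypothetical model path:

import onnxruntime as ort

# Prefer DirectML when the onnxruntime-directml build provides it;
# otherwise stay on the CPU provider.
available = ort.get_available_providers()
providers = ["DmlExecutionProvider"] if "DmlExecutionProvider" in available else ["CPUExecutionProvider"]
session = ort.InferenceSession(
    r"downloads\insightface\models\buffalo_l\1k3d68.onnx",  # hypothetical local path
    providers=providers,
)
print(session.get_providers())  # the providers actually applied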
SunGreen777 commented 5 months ago

I have the same problem with IP-Adapters not working anymore. Updated Controlnet a few minutes ago to the new version

Likewise. Whoever is in contact with the developers, please send them a message.

SunGreen777 commented 5 months ago

A clean install doesn't help either.

lshqqytiger commented 5 months ago

Try --use-cpu controlnet. You can't access the storage of torch-directml. As for FaceID, it depends on onnxruntime, and there is an onnxruntime-directml package. (I don't know whether FaceID will work with DirectML.)
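
For anyone unsure where that flag goes: it is added to the COMMANDLINE_ARGS line of webui-user.bat (a minimal sketch, assuming no other flags are needed):

set COMMANDLINE_ARGS=--use-cpu controlnet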

CS1o commented 5 months ago

That worked for the IP-Adapter, BUT now all other ControlNet models don't work anymore (depth, openpose, etc.). Getting the following error when using them:

2024-01-23 23:21:48,782 - ControlNet - INFO - unit_separate = False, style_align = False
2024-01-23 23:21:48,943 - ControlNet - INFO - Loading model: control_v11f1p_sd15_depth [4b72d323]
2024-01-23 23:21:48,991 - ControlNet - INFO - Loaded state_dict from [D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\models\control_v11f1p_sd15_depth.safetensors]
2024-01-23 23:21:48,991 - ControlNet - INFO - controlnet_default_config
2024-01-23 23:21:50,652 - ControlNet - INFO - ControlNet model control_v11f1p_sd15_depth [4b72d323] loaded.
2024-01-23 23:21:50,678 - ControlNet - INFO - Using preprocessor: depth
2024-01-23 23:21:50,678 - ControlNet - INFO - preprocessor resolution = 512
2024-01-23 23:21:54,665 - ControlNet - INFO - ControlNet Hooked - Time = 5.8860485553741455
  0%|                                                                                           | 0/30 [00:01<?, ?it/s]
*** Error completing request
*** Arguments: ('task(m5lgqqjrg79d8xi)', '1girl, masterpiece,', 'blurry, deformed,', [], 30, 'DPM++ 2M SDE Karras', 1, 1, 7, 768, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000020C016D3FA0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', ...}, {'ad_model': 'None', ...}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, ..., 512, 64, True, True, True, False, UiControlNetUnit(enabled=True, module='depth_midas', model='control_v11f1p_sd15_depth [4b72d323]', weight=1, image={'image': array([...], dtype=uint8), 'mask': array([...], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', ...), UiControlNetUnit(enabled=False, module='none', model='None', ...), ..., 'Generate and always save', 32) {}
    Traceback (most recent call last):
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\txt2img.py", line 64, in txt2img
        processed = processing.process_images(p)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\processing.py", line 735, in process_images
        res = process_images_inner(p)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 41, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\processing.py", line 872, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\hook.py", line 435, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\processing.py", line 1146, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 240, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 240, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\sampling.py", line 626, in sample_dpmpp_2m_sde
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
        result = forward_call(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\hook.py", line 845, in forward_webui
        raise e
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\hook.py", line 842, in forward_webui
        return forward(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\hook.py", line 570, in forward
        control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context, y=y)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\cldm.py", line 31, in forward
        return self.control_model(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\cldm.py", line 310, in forward
        h = module(h, emb, context)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 102, in forward
        x = layer(x)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 501, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 39, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: forward(op, args, kwargs))
      File "D:\Programme\AI-Zeug\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 13, in forward
        return op(*args, **kwargs)
    RuntimeError: tensor.device().type() == at::DeviceType::PrivateUse1 INTERNAL ASSERT FAILED at "D:\\a\\_work\\1\\s\\pytorch-directml-plugin\\torch_directml\\csrc\\dml\\DMLTensor.cpp":31, please report a bug to PyTorch. unbox expects Dml at::Tensor as inputs
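
This assert looks like a device mismatch rather than a ControlNet bug: with --use-cpu controlnet the ControlNet weights stay on the CPU, while the sampler hands the model DirectML tensors, and the DML backend only accepts DML tensors. A minimal sketch that should trigger the same class of failure (assuming torch-directml is installed; the exact message may differ):

import torch
import torch_directml

dml = torch_directml.device()
conv = torch.nn.Conv2d(3, 8, kernel_size=3)  # weights left on the CPU, like the ControlNet model here
x = torch.randn(1, 3, 64, 64).to(dml)        # input tensor produced on the DirectML device
conv(x)  # fails inside F.conv2d: CPU weights meet a DML input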

So it's one or the other now? What changed? The IP-Adapter worked for months on the GPU.

Try --use-cpu controlnet. You can't access the storage of torch-directml. As for FaceID, it depends on onnxruntime, and there is an onnxruntime-directml package. (I don't know whether FaceID will work with DirectML.)

SunGreen777 commented 5 months ago

Try --use-cpu controlnet. You can't access the storage of torch-directml. As for FaceID, it depends on onnxruntime, and there is an onnxruntime-directml package. (I don't know whether FaceID will work with DirectML.)

Thank you! Enabled it in the settings.

CS1o commented 5 months ago

The ControlNet dev added the ControlNet setting "Load CLIP preprocessor model on CPU". That fixes the error with the IP-Adapter, so there's no need to use the --use-cpu controlnet arg.

I also tested the new FaceID models + LoRAs, and they work too.