tencent-ailab / IP-Adapter

The image prompt adapter enables a pretrained text-to-image diffusion model to generate images from an image prompt.
Apache License 2.0
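For context, the adapter described above can be exercised in a few lines. A minimal sketch using the diffusers integration (assumptions: the load_ip_adapter API of diffusers >= 0.22 and the h94/IP-Adapter weight mirror, rather than this repo's own IPAdapter class; the reference-image URL is hypothetical):

import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

# Load SD 1.5 and attach the SD 1.5 IP-Adapter weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # on Apple Silicon use "mps"; keep fp16 off the CPU (see the issue below)
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)  # how strongly the image prompt steers generation

ref = load_image("https://example.com/reference.png")  # hypothetical URL
out = pipe(prompt="best quality", ip_adapter_image=ref,
           num_inference_steps=20).images[0]
out.save("result.png")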

[BUG] [macOS] RuntimeError: "addmm_impl_cpu_" not implemented for 'Half' Time taken: 10.5 sec. #316

Open cl000100 opened 3 months ago

cl000100 commented 3 months ago

macOS Sonoma 14.4 (23E214), Apple M1 Ultra, 64 GB


Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 19.5s (prepare environment: 0.4s, import torch: 2.0s, import gradio: 0.7s, setup paths: 0.5s, initialize shared: 0.1s, other imports: 0.5s, load scripts: 11.3s, create ui: 0.9s, gradio launch: 3.1s).
Applying attention optimization: sub-quadratic... done.
Model loaded in 6.7s (load weights from disk: 0.2s, create model: 0.7s, apply weights to model: 5.3s, move model to device: 0.1s, calculate empty prompt: 0.2s).
2024-03-14 08:47:10,439 - ControlNet - INFO - unit_separate = False, style_align = False
2024-03-14 08:47:10,602 - ControlNet - INFO - Loading model: ip-adapter_sd15 [6a3f6166]
2024-03-14 08:47:10,622 - ControlNet - INFO - Loaded state_dict from [/Users/lei/stable-diffusion-webui/models/ControlNet/ip-adapter_sd15.pth]
2024-03-14 08:47:10,694 - ControlNet - INFO - ControlNet model ip-adapter_sd15 [6a3f6166](ControlModelType.IPAdapter) loaded.
2024-03-14 08:47:10,694 - ControlNet - INFO - Using preprocessor: ip-adapter_clip_sd15
2024-03-14 08:47:10,694 - ControlNet - INFO - preprocessor resolution = 512
2024-03-14 08:47:20,411 - ControlNet - INFO - ControlNet Hooked - Time = 9.978827714920044
  0%|                                                    | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(uc665m7278h2m38)', <gradio.routes.Request object at 0x38ee35750>, '', '', [], 20, 'Euler a', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'mediapipe_face_full', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 'DemoFusion', True, 128, 64, 4, 2, False, 10, 1, 1, 64, False, True, 3, 1, 1, False, 512, 64, True, True, True, False, False, '', 0, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', UiControlNetUnit(enabled=True, module='ip-adapter_clip_sd15', model='ip-adapter_sd15 [6a3f6166]', weight=1, image={'image': array([[[  6,  37,  58],
***         ...]]], dtype=uint8), 'mask': array([[[0, 0, 0], ...]]], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), False, '', 0.5, True, False, '', 'Lerp', False, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Bilinear', False, 'Lerp', '', '', False, False, None, True, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "/Users/lei/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "/Users/lei/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/modules/txt2img.py", line 110, in txt2img
        processed = processing.process_images(p)
      File "/Users/lei/stable-diffusion-webui/modules/processing.py", line 785, in process_images
        res = process_images_inner(p)
      File "/Users/lei/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/modules/processing.py", line 921, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "/Users/lei/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 446, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/modules/processing.py", line 1257, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "/Users/lei/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 234, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/Users/lei/stable-diffusion-webui/modules/sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "/Users/lei/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 234, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py", line 237, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "/Users/lei/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/modules/sd_hijack_utils.py", line 18, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "/Users/lei/stable-diffusion-webui/modules/sd_hijack_utils.py", line 30, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/modules/sd_hijack_unet.py", line 48, in apply_model
        return orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs).float()
      File "/Users/lei/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 871, in forward_webui
        raise e
      File "/Users/lei/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 868, in forward_webui
        return forward(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 775, in forward
        h = module(h, emb, context)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 334, in forward
        x = block(x, context=context[i])
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 269, in forward
        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
      File "/Users/lei/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/autograd/function.py", line 539, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "/Users/lei/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "/Users/lei/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 273, in _forward
        x = self.attn2(self.norm2(x), context=context) + x
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlmodel_ipadapter.py", line 468, in attn_forward_hacked
        out = out + f(self, x, q)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlmodel_ipadapter.py", line 670, in forward
        ip_k = self.call_ip(k_key, cond_uncond_image_emb, device=q.device)
      File "/Users/lei/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlmodel_ipadapter.py", line 651, in call_ip
        ip = self.ipadapter.ip_layers.to_kvs[key](feat).to(device)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/modules/devices.py", line 164, in forward_wrapper
        result = self.org_forward(*args, **kwargs)
      File "/Users/lei/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 500, in network_Linear_forward
        return originals.Linear_forward(self, input)
      File "/Users/lei/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
        return F.linear(input, self.weight, self.bias)
    RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
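
The trace bottoms out in F.linear inside the IP-Adapter's to_kvs projection: the web UI keeps the UNet in float16 (devices.dtype_unet), but on this Mac the projection executes on the CPU, and the error means the PyTorch build in use has no CPU kernel for half-precision matrix multiplication. A minimal sketch that reproduces the limitation outside the web UI (assuming, as the trace suggests, a torch build without CPU fp16 addmm support):

import torch

# float16 weights and input on the CPU: torch builds without a CPU fp16
# addmm kernel raise the same error as in the trace above.
layer = torch.nn.Linear(4, 4).half()
x = torch.randn(1, 4, dtype=torch.float16)
layer(x)  # RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'

# Casting to float32 avoids the error:
layer.float()(x.float())
# On devices with fp16 kernels (e.g. CUDA, MPS) the same fp16 call succeeds.
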
cl000100 commented 3 months ago

Solved: adding --no-half at launch fixes it.
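
For the AUTOMATIC1111 web UI this means launching with ./webui.sh --no-half, or adding the flag to COMMANDLINE_ARGS in webui-user.sh (both are standard launch mechanisms on a default install). --no-half keeps the model weights in float32, which sidesteps the missing CPU fp16 kernel at the cost of roughly twice the model memory.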