AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: After updating the AUTOMATIC1111 web UI with [git pull] on a Mac M2 Pro, img2img seems to be broken #15253

Open — ImpFaidon opened this issue 8 months ago

ImpFaidon commented 8 months ago


What happened?

img2img is not working and keeps throwing errors. I tried adding [--no-half] to the command-line args of my webui launch script, which solved the problem for a while, and I also changed the Stable Diffusion setting *Upcast cross attention layer to float32*. I don't know if it has to do with my torch version, but I never experienced this problem before the update.

The error message I got (after the s/it readout went through the roof) is this: return F.linear(input, self.weight, self.bias) RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'

It also seems to be related to a ControlNet/TensorFlow error when I use the IP-Adapter and OpenPose units.
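
The same failure is easy to trigger outside the webui. Below is a minimal sketch of what I assume is happening (a half-precision tensor landing on the CPU backend — my own repro, not the actual webui code path):

    import torch

    # fp16 Linear on CPU: the torch 2.0.x CPU backend has no half-precision
    # addmm kernel, so F.linear raises the exact error from the logs.
    layer = torch.nn.Linear(4, 4).half()
    x = torch.randn(1, 4, dtype=torch.float16)
    try:
        layer(x)
    except RuntimeError as e:
        print(e)  # "addmm_impl_cpu_" not implemented for 'Half'

    # Upcasting to float32 (what --no-half does globally) sidesteps it:
    print(layer.float()(x.float()).shape)  # torch.Size([1, 4])

    # So does keeping the fp16 tensors on the MPS device instead of the CPU:
    if torch.backends.mps.is_available():
        print(layer.half().to("mps")(x.to("mps")).shape)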

Steps to reproduce the problem

  1. Upload a photo on the inpaint tab
  2. Sketch a mask over the face
  3. Choose the realvis_xl checkpoint
  4. Set "Only masked" and auto-select the dimensions (the triangle-ruler button)
  5. Enable the ControlNet IP-Adapter and OpenPose units with Pixel Perfect on
  6. Hit Generate (an approximate API equivalent of these steps is sketched below)
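
For reference, an approximate API equivalent of these steps (a hedged sketch: it assumes the server was started with --api at the default address, and the exact ControlNet arg layout depends on the installed sd-webui-controlnet version):

    import base64, requests

    def b64(path):
        # The /sdapi/v1/img2img endpoint takes images as base64 strings.
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode()

    payload = {
        "init_images": [b64("photo.png")],   # hypothetical file names
        "mask": b64("face_mask.png"),
        "inpaint_full_res": True,            # "Only masked"
        "denoising_strength": 0.62,
        "prompt": "beautiful 20 years old woman on the beach, ...",
        "sampler_name": "DPM++ 2M SDE Karras",
        "steps": 60,
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    {"module": "ip-adapter_clip_sdxl",
                     "model": "ip-adapter_xl [4209e9f7]",
                     "pixel_perfect": True},
                    {"module": "openpose_full",
                     "model": "thibaud_xl_openpose [c7b9cadd]",
                     "pixel_perfect": True},
                ]
            }
        },
    }
    r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
    r.raise_for_status()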

What should have happened?

img2img should run without errors instead of aborting the generation.

What browsers do you use to access the UI?

Apple Safari

Sysinfo

sysinfo-2024-03-14-10-03.json

Console logs

18%|█████████▊                                           | 7/38 [04:46<21:07, 40.89s/it]
*** Error completing request██▉                 | 14/38 [11:41<17:45, 44.41s/it]
*** Arguments: ('task(yv0dpwr0r2lbt1v)', 2, 'beautiful 20 years old woman on the beach, full body shot, skinny athletic body type, (silver blonde hair with bold black roots :1.5), (white light skin:1.5), soft lighting, realistic detail, textured skin, skin natural pores, wearing black streetwear clothing,\n(glossy skin:1.4), Canon EOS R6 Mark II shot, dim volumetric lighting, 8k octane beautifully detailed render, post-processing, extremely hyper detailed,  <lora:add-detail-xl:1>, <lora:ip-adapter-faceid_sdxl_lora:1>', '(deformed iris, deformed pupils), text, worst quality, low quality, ugly, deformed, noisy, blurry, low contrast, text, 3d, cgi, render, anime, open mouth, big forehead, long neck, extra limbs, cleavage, (low quality, worst quality:1.4), long neck, film grain, (analog photo:1.8), bad hands, bad anatomy, nsfw, nude', [], None, None, {'image': <PIL.Image.Image image mode=RGBA size=1536x2560 at 0x36216FC70>, 'mask': <PIL.Image.Image image mode=RGB size=1536x2560 at 0x36216E650>}, <PIL.Image.Image image mode=RGB size=1536x2560 at 0x36216EB00>, <PIL.Image.Image image mode=RGB size=1536x2560 at 0x350E6C040>, None, None, 60, 'DPM++ 2M SDE Karras', 4, 0, 1, 1, 1, 5, 1.5, 0.62, 0.0, 2560, 1536, 1, 0, 1, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x157b2bd90>, 0, False, 1, 0.5, 4, 0, 0.5, 2, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, False, False, 'base', False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, 
UiControlNetUnit(enabled=True, module='ip-adapter_clip_sdxl', model='ip-adapter_xl [4209e9f7]', weight=1, image={'image': array([[[ 67, 131, 218],
***         [ 82, 146, 233],
***         [ 83, 147, 234],
***         ...,
***         [ 77, 141, 228],
***         [ 75, 139, 226],
***         [ 62, 126, 213]],
*** 
***        [[ 83, 147, 234],
***         [ 97, 161, 248],
***         [ 98, 162, 249],
***         ...,
***         [ 92, 156, 243],
***         [ 91, 155, 242],
***         [ 79, 143, 230]],
*** 
***        [[ 80, 144, 231],
***         [ 92, 156, 243],
***         [ 92, 156, 243],
***         ...,
***         [ 87, 151, 238],
***         [ 88, 152, 239],
***         [ 78, 142, 229]],
*** 
***        ...,
*** 
***        [[132, 136, 147],
***         [137, 141, 152],
***         [145, 149, 160],
***         ...,
***         [ 80,  88, 137],
***         [ 69,  77, 126],
***         [ 74,  82, 131]],
*** 
***        [[132, 136, 147],
***         [135, 139, 150],
***         [141, 145, 156],
***         ...,
***         [128, 136, 185],
***         [124, 132, 181],
***         [107, 115, 164]],
*** 
***        [[ 92,  96, 107],
***         [ 93,  97, 108],
***         [ 96, 100, 111],
***         ...,
***         [ 75,  83, 132],
***         [ 82,  90, 139],
***         [ 71,  79, 128]]], dtype=uint8), 'mask': array([[[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        ...,
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0.15, guidance_end=0.88, pixel_perfect=True, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=True, module='openpose_full', model='thibaud_xl_openpose [c7b9cadd]', weight=1, image={'image': array([[[177, 194, 199],
***         [177, 195, 198],
***         [177, 195, 199],
***         ...,
***         [166, 208, 233],
***         [167, 209, 232],
***         [167, 208, 231]],
*** 
***        [[177, 195, 200],
***         [177, 194, 199],
***         [178, 194, 199],
***         ...,
***         [166, 208, 235],
***         [166, 207, 234],
***         [167, 209, 235]],
*** 
***        [[177, 195, 200],
***         [177, 194, 199],
***         [178, 195, 199],
***         ...,
***         [167, 208, 234],
***         [167, 207, 234],
***         [168, 208, 234]],
*** 
***        ...,
*** 
***        [[210, 188, 159],
***         [212, 191, 161],
***         [216, 195, 166],
***         ...,
***         [ 78,  90,  95],
***         [ 79,  92,  96],
***         [ 78,  90,  95]],
*** 
***        [[214, 194, 165],
***         [209, 187, 160],
***         [213, 190, 164],
***         ...,
***         [ 80,  94,  99],
***         [ 74,  86,  93],
***         [ 76,  88,  95]],
*** 
***        [[209, 193, 161],
***         [208, 190, 161],
***         [208, 188, 160],
***         ...,
***         [ 78,  89,  93],
***         [ 76,  87,  93],
***         [ 77,  89,  95]]], dtype=uint8), 'mask': array([[[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        ...,
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
*** 
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=0.5, pixel_perfect=True, control_mode='ControlNet is more important', inpaint_crop_input_image=True, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', save_detected_map=True, advanced_weighting=None), None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, '', None, True, False, False, False, False, False, 0, 0, '0', 0, False, True, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', False, False, False, 1, 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, 1, 1, '', 1, 1, ['After Upscaling/Before Restore Face'], 0, 'Portrait of a [gender]', 'blurry', 20, ['DPM++ 2M Karras'], '', 0, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CPU', False, 0, 'None', '', None, False, False, 0.5, 0, None, False, '0', '/Users/imperius_f/stable-diffusion-webui/models/roop/inswapper_128.onnx', 'CodeFormer', 1, '', 1, 1, False, True, False, False, False, False, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', False, 'None', None, None, False, None, None, False, None, None, False, 50, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
    Traceback (most recent call last):
      File "/Users/imperius_f/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "/Users/imperius_f/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/modules/img2img.py", line 235, in img2img
        processed = process_images(p)
      File "/Users/imperius_f/stable-diffusion-webui/modules/processing.py", line 785, in process_images
        res = process_images_inner(p)
      File "/Users/imperius_f/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/modules/processing.py", line 921, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "/Users/imperius_f/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 446, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/modules/processing.py", line 1661, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "/Users/imperius_f/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 188, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/Users/imperius_f/stable-diffusion-webui/modules/sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "/Users/imperius_f/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 188, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/Users/imperius_f/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 626, in sample_dpmpp_2m_sde
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "/Users/imperius_f/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py", line 237, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "/Users/imperius_f/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/modules/sd_models_xl.py", line 44, in apply_model
        return self.model(x, t, cond)
      File "/Users/imperius_f/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/modules/sd_hijack_utils.py", line 18, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "/Users/imperius_f/stable-diffusion-webui/modules/sd_hijack_utils.py", line 30, in __call__
        return self.__sub_func(self.__orig_func, *args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/modules/sd_hijack_unet.py", line 48, in apply_model
        return orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs).float()
      File "/Users/imperius_f/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/wrappers.py", line 28, in forward
        return self.diffusion_model(
      File "/Users/imperius_f/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 871, in forward_webui
        raise e
      File "/Users/imperius_f/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 868, in forward_webui
        return forward(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 775, in forward
        h = module(h, emb, context)
      File "/Users/imperius_f/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/openaimodel.py", line 100, in forward
        x = layer(x, context)
      File "/Users/imperius_f/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/repositories/generative-models/sgm/modules/attention.py", line 627, in forward
        x = block(x, context=context[i])
      File "/Users/imperius_f/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/repositories/generative-models/sgm/modules/attention.py", line 459, in forward
        return checkpoint(
      File "/Users/imperius_f/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/util.py", line 165, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "/Users/imperius_f/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "/Users/imperius_f/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/util.py", line 182, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "/Users/imperius_f/stable-diffusion-webui/repositories/generative-models/sgm/modules/attention.py", line 478, in _forward
        self.attn2(
      File "/Users/imperius_f/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlmodel_ipadapter.py", line 465, in attn_forward_hacked
        out = out + f(self, x, q)
      File "/Users/imperius_f/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlmodel_ipadapter.py", line 667, in forward
        ip_k = self.call_ip(k_key, cond_uncond_image_emb, device=q.device)
      File "/Users/imperius_f/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlmodel_ipadapter.py", line 648, in call_ip
        ip = self.ipadapter.ip_layers.to_kvs[key](feat).to(device)
      File "/Users/imperius_f/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/modules/devices.py", line 164, in forward_wrapper
        result = self.org_forward(*args, **kwargs)
      File "/Users/imperius_f/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 500, in network_Linear_forward
        return originals.Linear_forward(self, input)
      File "/Users/imperius_f/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
        return F.linear(input, self.weight, self.bias)
    RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'

Additional information

No response

tengshaofeng commented 7 months ago

I think it is because your torch is not the CUDA version.
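
Note that CUDA wheels do not exist for macOS at all; on an M2 the backend webui should be using is MPS. A quick way to check the installed build (standard torch APIs, for reference):

    import torch

    # On Apple Silicon the question is not CUDA vs. CPU but whether the
    # MPS backend was built into this wheel and is usable at runtime.
    print(torch.__version__)
    print("MPS built:", torch.backends.mps.is_built())
    print("MPS available:", torch.backends.mps.is_available())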