0xbitches / sd-webui-lcm

Latent Consistency Model for AUTOMATIC1111 Stable Diffusion WebUI
MIT License

RuntimeError: mat1 and mat2 must have the same dtype #31

Open Klaster1 opened 8 months ago

Klaster1 commented 8 months ago

I installed the extension on the latest "stable-diffusion-webui-directml". On my first txt2img attempt I got this error; after following this suggestion (like so), I'm now getting:

Weights loaded in 3.0s (load weights from disk: 0.5s, apply weights to model: 2.4s).
The config attributes {'requires_safety_checker': True} were passed to LatentConsistencyModelPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'requires_safety_checker': True} are not expected by LatentConsistencyModelPipeline and will be ignored.
Traceback (most recent call last):
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\dev\stable-diffusion-webui-directml\extensions\sd-webui-lcm\scripts\main.py", line 112, in generate
    result = pipe(
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\dev\stable-diffusion-webui-directml\extensions\sd-webui-lcm\lcm\lcm_pipeline.py", line 195, in __call__
    prompt_embeds = self._encode_prompt(
  File "C:\dev\stable-diffusion-webui-directml\extensions\sd-webui-lcm\lcm\lcm_pipeline.py", line 98, in _encode_prompt
    prompt_embeds = self.text_encoder(
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
    return self.text_model(
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
    encoder_outputs = self.encoder(
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
    layer_outputs = encoder_layer(
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 383, in forward
    hidden_states, attn_weights = self.self_attn(
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 272, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\dev\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 429, in network_Linear_forward
    return originals.Linear_forward(self, input)
  File "C:\dev\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
  File "C:\dev\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 39, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: forward(op, args, kwargs))
  File "C:\dev\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 13, in forward
    return op(*args, **kwargs)
RuntimeError: mat1 and mat2 must have the same dtype
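For context, the failure is in the final `F.linear(input, self.weight, self.bias)` call: the tensor flowing into the text encoder and the encoder's weights ended up in different dtypes (typically one in float16 and the other in float32 after a mixed half/full-precision load). The sketch below is purely illustrative (the function name is mine, not torch's internals); it mimics the guard behind the matmul kernel that produces this exact message:

```python
def linear_dtype_check(mat1_dtype: str, mat2_dtype: str) -> str:
    """Illustrative stand-in for the dtype guard inside torch's matmul
    kernels: both operands must share one dtype or the call refuses."""
    if mat1_dtype != mat2_dtype:
        raise RuntimeError("mat1 and mat2 must have the same dtype")
    return mat1_dtype

# Mixed precision -> the exact error in the traceback above
try:
    linear_dtype_check("float16", "float32")
except RuntimeError as e:
    print(e)  # mat1 and mat2 must have the same dtype

# Forcing one dtype end to end (e.g. casting the whole pipeline to
# float32, or loading everything in float16) avoids the mismatch
print(linear_dtype_check("float32", "float32"))
```

If that reading is right, the usual workaround is to make sure the LCM pipeline and the text encoder are cast to the same dtype before generation, rather than letting the DirectML autocast wrapper hand mismatched operands to the linear layer.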

Did I do something wrong? Is there any chance sd-webui-lcm just isn't compatible with DirectML and AMD on Windows?