lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Textual Inversion does not work with Stable Diffusion XL #464

Closed. EygenCat closed this issue 4 months ago.

EygenCat commented 4 months ago

Checklist

What happened?

When using Textual Inversion with Stable Diffusion XL, image generation fails with the error "RuntimeError: Expected attn_mask dtype to be bool or to match query dtype, but got attn_mask.dtype: float and query.dtype: struct c10::Half instead." (Screenshot attached: 2024-05-18 195310.)
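
The check that fails is torch's own: F.scaled_dot_product_attention only accepts an attention mask that is boolean or has the same dtype as the query, and here the mask reaching it is float32 while the query is float16 (Half). A minimal sketch of the mismatch, with illustrative tensor shapes that are not taken from the model:

    import torch
    import torch.nn.functional as F

    # Half-precision query/key/value, as the SDXL text encoder produces when the model runs in fp16.
    q = torch.randn(1, 8, 77, 64, dtype=torch.float16)
    k = torch.randn(1, 8, 77, 64, dtype=torch.float16)
    v = torch.randn(1, 8, 77, 64, dtype=torch.float16)

    # Float32 causal mask, like open_clip's model.attn_mask.
    mask = torch.zeros(77, 77, dtype=torch.float32)

    try:
        F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
    except RuntimeError as e:
        print(e)  # "Expected attn_mask dtype to be bool or to match query dtype ..." on affected builds

    # The dtype check passes once the mask is cast to the query dtype (or made boolean).
    mask_fixed = mask.to(q.dtype)
    assert mask_fixed.dtype == q.dtype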

Steps to reproduce the problem

Add a Textual Inversion XL embedding to the negative prompt of a Stable Diffusion XL checkpoint, then click the Generate button.

What should have happened?

An image should have been generated.

What browsers do you use to access the UI?

Microsoft Edge

Sysinfo

sysinfo-2024-05-18-16-56.json

Console logs

venv "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.3-amd-13-g517aaaff
Commit hash: 517aaaff2bb1a512057d88b0284193b9f23c0b47
Skipping onnxruntime installation.
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
H:\IL\amd14052024\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --opt-sub-quad-attention --disable-nan-check --precision autocast --autolaunch --use-directml --skip-ort --listen
==============================================================================
You are running torch 2.0.0+cpu.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
CivitAI Browser+: Aria2 RPC started
Loading weights [67ab2fd8ec] from H:\IL\amd14052024\stable-diffusion-webui-amdgpu\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
Creating model from config: H:\IL\amd14052024\stable-diffusion-webui-amdgpu\repositories\generative-models\configs\inference\sd_xl_base.yaml
H:\IL\amd14052024\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 20.1s (prepare environment: 17.0s, initialize shared: 1.4s, other imports: 0.3s, load scripts: 4.4s, create ui: 0.8s, gradio launch: 4.3s).
Applying attention optimization: sub-quadratic... done.
Model loaded in 8.2s (load weights from disk: 0.4s, create model: 0.6s, apply weights to model: 6.6s, move model to device: 0.1s, calculate empty prompt: 0.2s).
*** Error completing request
*** Arguments: ('task(0tiwmpdqsaeprxy)', <gradio.routes.Request object at 0x000001A32F75E7A0>, '', ' aidxlv05_neg', [], 1, 1, 9, 624, 480, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\modules\processing.py", line 847, in process_images
        res = process_images_inner(p)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\modules\processing.py", line 1053, in process_images_inner
        p.setup_conds()
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\modules\processing.py", line 1589, in setup_conds
        super().setup_conds()
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\modules\processing.py", line 507, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\modules\processing.py", line 493, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\modules\prompt_parser.py", line 188, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\modules\sd_models_xl.py", line 32, in get_learned_conditioning
        c = self.conditioner(sdxl_conds, force_zero_embeddings=['txt'] if force_zero_negative_prompt else [])
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 141, in forward
        emb_out = embedder(batch[embedder.input_key])
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\modules\sd_hijack_clip.py", line 276, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\modules\sd_hijack_open_clip.py", line 57, in encode_with_transformers
        d = self.wrapped.encode_with_transformer(tokens)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 470, in encode_with_transformer
        x = self.text_transformer_forward(x, attn_mask=self.model.attn_mask)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 502, in text_transformer_forward
        x = r(x, attn_mask=attn_mask)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\venv\lib\site-packages\open_clip\transformer.py", line 242,in forward
        x = q_x + self.ls_1(self.attention(q_x=self.ln_1(q_x), k_x=k_x, v_x=v_x, attn_mask=attn_mask))
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\venv\lib\site-packages\open_clip\transformer.py", line 228,in attention
        return self.attn(
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\extensions-builtin\Lora\networks.py", line 560, in network_MultiheadAttention_forward
        return originals.MultiheadAttention_forward(self, *args, **kwargs)
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\activation.py", line 1189, in forward
        attn_output, attn_output_weights = F.multi_head_attention_forward(
      File "H:\IL\amd14052024\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\functional.py", line 5334, in multi_head_attention_forward
        attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)
    RuntimeError: Expected attn_mask dtype to be bool or to match query dtype, but got attn_mask.dtype: float and query.dtype: struct c10::Half instead.

---

Additional information

CPU: 5600X, RAM: 32 GB, GPU: RX 6700 XT

https://civitai.com/models/118418/negativexl https://civitai.com/models/257749/pony-diffusion-v6-xl
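
For reference, one possible local workaround (a sketch only, untested here, not a fix shipped with the web UI) is to cast open_clip's float attention mask to the text model's own dtype once after the checkpoint loads, so the dtype check above passes. The attribute names follow the traceback (conditioner.embedders, .wrapped, .model.attn_mask) and may differ between versions:

    import torch

    def cast_clip_attn_mask(sd_model):
        # SDXL checkpoints expose a conditioner holding a list of embedders;
        # non-SDXL models have no .conditioner, so do nothing for them.
        conditioner = getattr(sd_model, "conditioner", None)
        if conditioner is None:
            return
        for embedder in conditioner.embedders:
            # The web UI hijacks each embedder and keeps the original in .wrapped.
            inner = getattr(embedder, "wrapped", embedder)
            clip_model = getattr(inner, "model", None)  # open_clip model that owns attn_mask
            mask = getattr(clip_model, "attn_mask", None)
            if clip_model is None or not isinstance(mask, torch.Tensor):
                continue
            if mask.is_floating_point():
                # Match the dtype of the text model weights (float16 when the model is half).
                clip_model.attn_mask = mask.to(next(clip_model.parameters()).dtype)

Calling cast_clip_attn_mask(shared.sd_model) once after the model is loaded (for example from a small user script) only changes the mask dtype; whether that is enough on this DirectML setup has not been verified.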

EygenCat commented 4 months ago

I'm using ZLUDA; Textual Inversion works.

AvailableHost commented 3 months ago

I'm using ZLUDA; Textual Inversion works.

Do you have a guide for installing it?

EygenCat commented 3 months ago

I'm using ZLUDA; Textual Inversion works.

Do you have a guide for installing it?

From here, read only the following ZLUDA steps: Install Visual C++ Runtime and Install HIP SDK. You don't need anything else from there. In the file webui-user.bat, set COMMANDLINE_ARGS= --autolaunch --use-zluda. You need to reinstall everything from scratch.
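
For clarity, a webui-user.bat edited for ZLUDA might look like the following. This is only a sketch of the stock template; the COMMANDLINE_ARGS line is the one from the comment above:

    @echo off

    set PYTHON=
    set GIT=
    set VENV_DIR=
    set COMMANDLINE_ARGS= --autolaunch --use-zluda

    call webui.bat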