hako-mikan / sd-webui-negpip

Extension for Stable Diffusion web-ui that enables negative prompts within the prompt
GNU Affero General Public License v3.0

Error with fp8 weight option #38

Open · cololy opened this issue 4 months ago

cololy commented 4 months ago

I am using A1111 v1.8.0 with the fp8 weight option enabled, and the following error occurred: "RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float8_e4m3fn"

The error does not occur with the fp8 weight option disabled, and it also does not occur without a minus (negative-weight) prompt.

*** Error running process_batch: E:\SDXL\webui\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py
    Traceback (most recent call last):
      File "E:\SDXL\webui\stable-diffusion-webui\modules\scripts.py", line 808, in process_batch
        script.process_batch(p, *script_args, **kwargs)
      File "E:\SDXL\webui\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 213, in process_batch
        self.conds_all = calcconds(nip)
      File "E:\SDXL\webui\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 205, in calcconds
        conds, contokens = conddealer(targets)
      File "E:\SDXL\webui\stable-diffusion-webui\extensions\sd-webui-negpip\scripts\negpip.py", line 179, in conddealer
        cond = prompt_parser.get_learned_conditioning(shared.sd_model,input,p.steps)
      File "E:\SDXL\webui\stable-diffusion-webui\modules\prompt_parser.py", line 188, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "E:\SDXL\webui\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
        c = self.cond_stage_model(c)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\SDXL\webui\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "E:\SDXL\webui\stable-diffusion-webui\modules\sd_hijack_clip.py", line 276, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "E:\SDXL\webui\stable-diffusion-webui\modules\sd_hijack_clip.py", line 331, in encode_with_transformers
        outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
        return self.text_model(
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
        encoder_outputs = self.encoder(
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
        layer_outputs = encoder_layer(
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 383, in forward
        hidden_states, attn_weights = self.self_attn(
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 272, in forward
        query_states = self.q_proj(hidden_states) * self.scale
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\SDXL\webui\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 500, in network_Linear_forward
        return originals.Linear_forward(self, input)
      File "E:\SDXL\webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
        return F.linear(input, self.weight, self.bias)
    RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float8_e4m3fn
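
For reference, the mismatch is easy to reproduce outside the webui. The sketch below is illustrative only (the shapes are made up, and PyTorch >= 2.1 is assumed for `torch.float8_e4m3fn`): a half-precision activation hits a linear layer whose weight is stored in fp8, which is essentially what the fp8 weight option leaves in place when `q_proj` is called above. Casting the weight back to the activation dtype at compute time avoids the error.

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: a CLIP-sized hidden state in fp16 against a projection weight stored in fp8.
x = torch.randn(1, 77, 768, dtype=torch.float16)    # activations (Half)
w = torch.randn(768, 768).to(torch.float8_e4m3fn)   # weight kept in fp8 by the fp8 weight option

try:
    F.linear(x, w)                                   # raises a RuntimeError like the one above
except RuntimeError as e:
    print(e)

# Casting the weight back to the activation dtype at compute time avoids the mismatch.
out = F.linear(x, w.to(x.dtype))
print(out.dtype)                                     # torch.float16
```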
cololy commented 3 months ago

For me, this fork resolves the issue: https://github.com/SLAPaper/sd-webui-negpip
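
A general workaround pattern for extensions that run their own conditioning pass (a sketch only; I have not checked whether the fork above does this, and `weights_cast_to` is a hypothetical helper) is to temporarily cast the text encoder's parameters back to the activation dtype around the `get_learned_conditioning` call from the traceback, then restore the stored fp8 dtypes afterwards:

```python
import contextlib
import torch

@contextlib.contextmanager
def weights_cast_to(module: torch.nn.Module, dtype: torch.dtype = torch.float16):
    """Temporarily cast a module's parameters to `dtype`, restoring their original dtypes on exit."""
    saved = [(p, p.dtype) for p in module.parameters()]
    for p, _ in saved:
        p.data = p.data.to(dtype)
    try:
        yield module
    finally:
        for p, original_dtype in saved:
            p.data = p.data.to(original_dtype)

# Hypothetical usage around the call that fails in the traceback:
#   with weights_cast_to(shared.sd_model.cond_stage_model):
#       cond = prompt_parser.get_learned_conditioning(shared.sd_model, input, p.steps)
```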