lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half' + "log_vml_cpu" not implemented for 'Half' #262

Closed. Kamael-cs closed this 10 months ago.

Kamael-cs commented 10 months ago

Is there an existing issue for this?

What happened?

In txt2img or img2img mode, every time I hit Generate it fails with RuntimeError: "LayerNormKernelImpl" not implemented for 'Half' or RuntimeError: "log_vml_cpu" not implemented for 'Half', respectively, for the two modes.

Steps to reproduce the problem

I ran the bat file after the first step (the "git clone" command), and the installation was mostly successful, except that generation fails with the two errors shown in the description. It seems like there is some problem related to 'Half' (half precision) not being supported.

What should have happened?

It should work normally and generate images instead of throwing these two errors when I hit Generate.

Sysinfo

Windows 11, DDR4 RAM, Intel i5-13600K, AMD RX 6950 XT 16 GB; the power supply is not a bottleneck.

What browsers do you use to access the UI?

Google Chrome

Console logs

To create a public link, set `share=True` in `launch()`.
Startup time: 0.6s (load scripts: 0.2s, create ui: 0.3s, gradio launch: 0.1s).
*** Error completing request
*** Arguments: ('task(uyljfqgjy10q79z)', 'white men', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000002BBAF34FF40>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\processing.py", line 856, in process_images_inner
        p.setup_conds()
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\processing.py", line 1309, in setup_conds
        super().setup_conds()
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\processing.py", line 469, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\processing.py", line 455, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\prompt_parser.py", line 189, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
        c = self.cond_stage_model(c)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\sd_hijack_clip.py", line 273, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\sd_hijack_clip.py", line 326, in encode_with_transformers
        outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
        result = forward_call(*args, **kwargs)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
        return self.text_model(
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
        encoder_outputs = self.encoder(
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
        layer_outputs = encoder_layer(
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 382, in forward
        hidden_states = self.layer_norm1(hidden_states)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 474, in network_LayerNorm_forward
        return originals.LayerNorm_forward(self, input)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
        return F.layer_norm(
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
        return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

---
*** Error completing request
*** Arguments: ('task(2b4df2s13thzou1)', 0, 'white men', '', [], None, None, None, None, None, None, None, 20, 'DPM++ 2M Karras', 4, 0, 1, 1, 1, 7, 1.5, 0.75, 0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x000002BC724F72B0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\img2img.py", line 208, in img2img
        processed = process_images(p)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\processing.py", line 803, in process_images_inner
        p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\processing.py", line 1392, in init
        self.sampler = sd_samplers.create_sampler(self.sampler_name, self.sd_model)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\sd_samplers.py", line 35, in create_sampler
        sampler = config.constructor(model)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 43, in <lambda>
        sd_samplers_common.SamplerData(label, lambda model, funcname=funcname: KDiffusionSampler(funcname, model), aliases, options)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 89, in __init__
        self.model_wrap = self.model_wrap_cfg.inner_model
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 74, in inner_model
        self.model_wrap = denoiser(shared.sd_model, quantize=shared.opts.enable_quantization)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 135, in __init__
        super().__init__(model, model.alphas_cumprod, quantize=quantize)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 92, in __init__
        super().__init__(((1 - alphas_cumprod) / alphas_cumprod) ** 0.5, quantize)
      File "C:\Users\Kamael\sd\webui\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 48, in __init__
        self.register_buffer('log_sigmas', sigmas.log())
    RuntimeError: "log_vml_cpu" not implemented for 'Half'

---

Additional information

The log is long, but it essentially boils down to the two errors mentioned in the title.
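For reference, both errors can be reproduced outside the web UI with a few lines of PyTorch, because the CPU backend in this build has no float16 (Half) kernels for these operations. This is only an illustrative sketch; the exact behaviour depends on the installed torch version.

    import torch

    # LayerNorm on a CPU tensor in half precision: this is what the CLIP text
    # encoder hits in the first traceback.
    x = torch.randn(1, 77, 768).half()
    try:
        torch.nn.functional.layer_norm(x, (768,))
    except RuntimeError as e:
        print(e)  # "LayerNormKernelImpl" not implemented for 'Half'

    # log() on a CPU half tensor: this is what k-diffusion's sigma schedule
    # hits in the second traceback (register_buffer('log_sigmas', sigmas.log())).
    sigmas = torch.rand(1000).half()
    try:
        print(sigmas.log())
    except RuntimeError as e:
        print(e)  # "log_vml_cpu" not implemented for 'Half'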

Kamael-cs commented 10 months ago

Found the solution myself: just add the --no-half argument to the bat file and that's it. However, after doing this I still have another problem: the GPU is not used at all while generating pictures.
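For anyone else landing here, a minimal sketch of that webui-user.bat change (the layout follows the stock webui-user.bat; everything except the COMMANDLINE_ARGS line is unchanged boilerplate):

    @echo off

    set PYTHON=
    set GIT=
    set VENV_DIR=

    rem --no-half keeps the model weights in full precision, so the fallback
    rem path no longer hits the missing 'Half' CPU kernels from the traceback.
    set COMMANDLINE_ARGS=--no-half

    call webui.bat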

lshqqytiger commented 10 months ago

Did you clone the upstream repository? Clone lshqqytiger/stable-diffusion-webui-directml, not AUTOMATIC1111/stable-diffusion-webui, and try again.
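In other words, the suggestion is to clone the fork itself rather than the upstream AUTOMATIC1111 repository, roughly like this (the fork now appears under the stable-diffusion-webui-amdgpu name, so the old URL should redirect):

    git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml.git
    cd stable-diffusion-webui-directml
    webui-user.bat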

gef3dx commented 10 months ago

I installed everything as in the instructions, and this error pops up: RuntimeError: "log_vml_cpu" not implemented for 'Half'. After adding --no-half the problem goes away, but the GPU does not work; only the CPU is used. How can I solve this? I cloned lshqqytiger/stable-diffusion-webui-directml. My GPU is a Gigabyte AMD Radeon RX 580 8 GB. My command line: COMMANDLINE_ARGS=--opt-sub-quad-attention --lowvram --disable-nan-check --autolaunch --medvram --precision full --no-half --precision full --opt-split-attention-v1 --theme dark

lshqqytiger commented 10 months ago

> I installed everything as in the instructions [...] after adding --no-half the problem goes away, but the GPU does not work; only the CPU is used.

Try this solution: https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues/270#issuecomment-1712846648
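For comparison, here is a cleaned-up sketch of the COMMANDLINE_ARGS line from the comment above (an illustration only, not the contents of the linked comment): the duplicated --precision full is dropped, the broken "- -theme" spacing is fixed, and only one of --lowvram / --medvram is kept, since only one of the two modes takes effect.

    rem webui-user.bat sketch for the DirectML fork on an RX 580 8 GB (illustrative only).
    rem Depending on the fork version, a backend flag such as --use-directml may also be
    rem needed for the GPU to be used instead of the CPU; check the fork's README and the
    rem comment linked above.
    set COMMANDLINE_ARGS=--opt-sub-quad-attention --lowvram --disable-nan-check --autolaunch --precision full --no-half --opt-split-attention-v1 --theme dark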