lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Generating images doesn't work on RX 6650XT #360

Closed: craxzK530 closed this issue 5 months ago

craxzK530 commented 5 months ago


What happened?

I tried generating on a fresh install and it just spat errors at me instead of generating an image.

Steps to reproduce the problem

  1. Press the Generate button.

What should have happened?

WebUI should've generated an image.

What browsers do you use to access the UI?

No response

Sysinfo

Internal Server Error (sorry, I couldn't download it)

Console logs

venv "D:\stable-diffusion\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.7.0
Commit hash: d500e58a65d99bfaa9c7bb0da6c3eb5704fadf25
Installing requirements for CodeFormer
Installing requirements
Launching Web UI with arguments: --skip-torch-cuda-test
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Style database not found: D:\stable-diffusion\stable-diffusion-webui-directml\styles.csv
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Calculating sha256 for D:\stable-diffusion\stable-diffusion-webui-directml\models\Stable-diffusion\model.ckpt:
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 957.0s (prepare environment: 927.0s, import torch: 11.1s, import gradio: 3.1s, setup paths: 3.7s, initialize shared: 1.1s, other imports: 4.8s, setup codeformer: 0.7s, load scripts: 3.2s, load upscalers: 0.1s, initialize extra networks: 0.2s, create ui: 0.8s, gradio launch: 0.8s).
cc6cb27103417325ff94f52b7a5d2dde45a7515b25c255d8e396c90014281516
Loading weights [cc6cb27103] from D:\stable-diffusion\stable-diffusion-webui-directml\models\Stable-diffusion\model.ckpt
Creating model from config: D:\stable-diffusion\stable-diffusion-webui-directml\configs\v1-inference.yaml
vocab.json: 100%|████████████████████████████████████████████████████████████████████| 961k/961k [00:00<00:00, 987kB/s]
merges.txt: 100%|████████████████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 938kB/s]
special_tokens_map.json: 100%|████████████████████████████████████████████████████████████████| 389/389 [00:00<?, ?B/s]
tokenizer_config.json: 100%|██████████████████████████████████████████████████████████████████| 905/905 [00:00<?, ?B/s]
config.json: 100%|████████████████████████████████████████████████████████████████████████| 4.52k/4.52k [00:00<?, ?B/s]
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "D:\Python\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "D:\Python\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "D:\Python\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\initialize.py", line 147, in load_model
    shared.sd_model  # noqa: B018
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\shared_items.py", line 128, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_models.py", line 576, in get_sd_model
    load_model()
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_models.py", line 746, in load_model
    sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_models.py", line 628, in get_empty_cond
    return sd_model.cond_stage_model([""])
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 234, in forward
    z = self.process_tokens(tokens, multipliers)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 273, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 326, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
    return self.text_model(
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
    encoder_outputs = self.encoder(
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
    layer_outputs = encoder_layer(
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 382, in forward
    hidden_states = self.layer_norm1(hidden_states)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 531, in network_LayerNorm_forward
    return originals.LayerNorm_forward(self, input)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
    return F.layer_norm(
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

Stable diffusion model failed to load
Using already loaded model model.ckpt [cc6cb27103]: done in 0.0s
Traceback (most recent call last):
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 419, in pages_html
    return refresh()
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 425, in refresh
    pg.refresh()
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\ui_extra_networks_textual_inversion.py", line 15, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 222, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
AttributeError: 'NoneType' object has no attribute 'cond_stage_model'
Traceback (most recent call last):
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 419, in pages_html
    return refresh()
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\ui_extra_networks.py", line 425, in refresh
    pg.refresh()
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\ui_extra_networks_textual_inversion.py", line 15, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 222, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
AttributeError: 'NoneType' object has no attribute 'cond_stage_model'
Exception in thread Thread-21 (load_model):
Traceback (most recent call last):
  File "D:\Python\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "D:\Python\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\initialize.py", line 153, in load_model
    devices.first_time_calculation()
  File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\devices.py", line 177, in first_time_calculation
    linear(x)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 486, in network_Linear_forward
    return originals.Linear_forward(self, input)
  File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
*** Error completing request
*** Arguments: ('task(w34neh1mzqozkwd)', 'bag of chips', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 64, 64, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x00000212544E9630>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\txt2img.py", line 64, in txt2img
        processed = processing.process_images(p)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 735, in process_images
        res = process_images_inner(p)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 861, in process_images_inner
        p.setup_conds()
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 1312, in setup_conds
        super().setup_conds()
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 469, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 455, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\prompt_parser.py", line 188, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
        c = self.cond_stage_model(c)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 273, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 326, in encode_with_transformers
        outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
        return self.text_model(
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
        encoder_outputs = self.encoder(
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
        layer_outputs = encoder_layer(
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 382, in forward
        hidden_states = self.layer_norm1(hidden_states)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 531, in network_LayerNorm_forward
        return originals.LayerNorm_forward(self, input)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
        return F.layer_norm(
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
        return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

---
*** Error completing request
*** Arguments: ('task(z3cdpkwvbn92650)', 'bag of chips', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 64, 64, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000021253035B40>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\txt2img.py", line 64, in txt2img
        processed = processing.process_images(p)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 735, in process_images
        res = process_images_inner(p)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 861, in process_images_inner
        p.setup_conds()
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 1312, in setup_conds
        super().setup_conds()
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 469, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 455, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\prompt_parser.py", line 188, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
        c = self.cond_stage_model(c)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 273, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 326, in encode_with_transformers
        outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
        return self.text_model(
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
        encoder_outputs = self.encoder(
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
        layer_outputs = encoder_layer(
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 382, in forward
        hidden_states = self.layer_norm1(hidden_states)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 531, in network_LayerNorm_forward
        return originals.LayerNorm_forward(self, input)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
        return F.layer_norm(
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
        return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

---
*** Error completing request
*** Arguments: ('task(ogof7rtf1tq37zg)', 0, '', 'african american person dark skin', [], <PIL.Image.Image image mode=RGBA size=1000x1429 at 0x212544EBF40>, None, None, None, None, None, None, 20, 'Euler a', 4, 0, 1, 1, 1, 7, 1.5, 0.5, 0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x0000021253053EB0>, 0, False, '', 0.8, 2, False, -1, 0, 0, 0, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\img2img.py", line 247, in img2img
        processed = process_images(p)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 735, in process_images
        res = process_images_inner(p)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 808, in process_images_inner
        p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 1395, in init
        self.sampler = sd_samplers.create_sampler(self.sampler_name, self.sd_model)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_samplers.py", line 35, in create_sampler
        sampler = config.constructor(model)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 43, in <lambda>
        sd_samplers_common.SamplerData(label, lambda model, funcname=funcname: KDiffusionSampler(funcname, model), aliases, options)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 89, in __init__
        self.model_wrap = self.model_wrap_cfg.inner_model
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 74, in inner_model
        self.model_wrap = denoiser(shared.sd_model, quantize=shared.opts.enable_quantization)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 135, in __init__
        super().__init__(model, model.alphas_cumprod, quantize=quantize)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 92, in __init__
        super().__init__(((1 - alphas_cumprod) / alphas_cumprod) ** 0.5, quantize)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 48, in __init__
        self.register_buffer('log_sigmas', sigmas.log())
    RuntimeError: "log_vml_cpu" not implemented for 'Half'

---
*** Error completing request
*** Arguments: ('task(nqgm58oejoqhozo)', 1, '', 'african american person dark skin', [], <PIL.Image.Image image mode=RGBA size=1000x1429 at 0x212987DDD50>, <PIL.Image.Image image mode=RGB size=1000x1429 at 0x212987DE6E0>, None, None, None, None, None, 20, 'Euler a', 4, 0, 1, 1, 1, 7, 1.5, 0.5, 0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x0000021252A05840>, 0, False, '', 0.8, 2, False, -1, 0, 0, 0, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\img2img.py", line 247, in img2img
        processed = process_images(p)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 735, in process_images
        res = process_images_inner(p)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 808, in process_images_inner
        p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 1395, in init
        self.sampler = sd_samplers.create_sampler(self.sampler_name, self.sd_model)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_samplers.py", line 35, in create_sampler
        sampler = config.constructor(model)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 43, in <lambda>
        sd_samplers_common.SamplerData(label, lambda model, funcname=funcname: KDiffusionSampler(funcname, model), aliases, options)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 89, in __init__
        self.model_wrap = self.model_wrap_cfg.inner_model
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 74, in inner_model
        self.model_wrap = denoiser(shared.sd_model, quantize=shared.opts.enable_quantization)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 135, in __init__
        super().__init__(model, model.alphas_cumprod, quantize=quantize)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 92, in __init__
        super().__init__(((1 - alphas_cumprod) / alphas_cumprod) ** 0.5, quantize)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 48, in __init__
        self.register_buffer('log_sigmas', sigmas.log())
    RuntimeError: "log_vml_cpu" not implemented for 'Half'

---
*** Error completing request
*** Arguments: ('task(wjx6bmzhrra3clp)', 'bag of chips', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 64, 64, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x00000212544E88E0>, 0, False, '', 0.8, 605574566, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\txt2img.py", line 64, in txt2img
        processed = processing.process_images(p)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 735, in process_images
        res = process_images_inner(p)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 861, in process_images_inner
        p.setup_conds()
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 1312, in setup_conds
        super().setup_conds()
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 469, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, total_steps, [self.cached_uc], self.extra_network_data)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\processing.py", line 455, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\prompt_parser.py", line 188, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
        c = self.cond_stage_model(c)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 273, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 326, in encode_with_transformers
        outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
        return self.text_model(
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 740, in forward
        encoder_outputs = self.encoder(
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 654, in forward
        layer_outputs = encoder_layer(
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 382, in forward
        hidden_states = self.layer_norm1(hidden_states)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 531, in network_LayerNorm_forward
        return originals.LayerNorm_forward(self, input)
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
        return F.layer_norm(
      File "D:\stable-diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
        return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

Additional information

No response

lshqqytiger commented 5 months ago

Replace --skip-torch-cuda-test with --use-directml.
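For context, every failure in the logs has the same root cause: with only --skip-torch-cuda-test, torch has no GPU backend and silently falls back to the CPU, where kernels such as layer_norm, addmm, and log have no float16 ("Half") implementation. The snippet below is a minimal sketch of that failure mode, assuming torch and the torch-directml package (which --use-directml relies on) are present in the web UI's venv, and assuming layer_norm is among the ops the DirectML backend implements, as the working setup later in this thread suggests:

```python
# Minimal reproduction of the error from the logs, run inside the venv.
import torch
import torch.nn.functional as F

x = torch.randn(1, 77, 768, dtype=torch.float16)  # fp16 tensor on the CPU

try:
    F.layer_norm(x, (768,))
except RuntimeError as e:
    print(e)  # "LayerNormKernelImpl" not implemented for 'Half'

# With --use-directml, the web UI places tensors on the DirectML device
# instead, where fp16 kernels are available.
import torch_directml

dml = torch_directml.device()          # the default DirectML adapter
y = F.layer_norm(x.to(dml), (768,))
print(y.device, y.dtype)
```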

cmondev commented 4 months ago

Wow, it's that easy. I'm using the same graphics card. I had used both arguments together, and on the first image-generation run all I got was an image of noise; there was no warning that you shouldn't use both arguments together. Now it's working :)
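As a quick check that the replacement flag will find the GPU: the arguments go in webui-user.bat's COMMANDLINE_ARGS, and per the exchange above --use-directml should replace --skip-torch-cuda-test rather than be added next to it. A hypothetical pre-launch sanity check, assuming torch-directml is already installed in the web UI's venv:

```python
# Confirms the DirectML backend enumerates the AMD GPU before launching.
import torch_directml

print(torch_directml.device_count())  # expect 1 for a single RX 6650 XT
print(torch_directml.device_name(0))  # expect the Radeon adapter's name
```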