lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI

[Bug]: broken after enabled composable lora #250

Open Nsch11 opened 1 year ago

Nsch11 commented 1 year ago

Is there an existing issue for this?

What happened?

After enabling Composable LoRA, the following error message appears:

RuntimeError: tensor.device().type() == at::DeviceType::PrivateUse1 INTERNAL ASSERT FAILED at "D:\\a\\_work\\1\\s\\pytorch-directml-plugin\\torch_directml\\csrc\\dml\\DMLTensor.cpp":31, please report a bug to PyTorch. unbox expects Dml at::Tensor as inputs
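This assertion is raised by torch-directml when an operator that expects DirectML tensors receives a tensor living on another device (typically the CPU). The snippet below is only a minimal sketch of that failure class, not the WebUI code path; it assumes the torch-directml package is installed, and mixing devices like this is expected to raise a device-mismatch error of the same kind as the assert above.

```python
# Minimal sketch of the failure class (assumption: not the WebUI code path).
# Mixing a CPU activation with Conv2d weights that live on the DirectML
# device should fail with a device-mismatch error like the one reported.
import torch
import torch_directml

dml = torch_directml.device()                         # DirectML device ("privateuseone:0")
conv = torch.nn.Conv2d(4, 4, kernel_size=1).to(dml)   # weights on the DML device
x_cpu = torch.randn(1, 4, 8, 8)                       # activation left on the CPU

out = conv(x_cpu)                                     # device mismatch -> RuntimeError
```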

Steps to reproduce the problem

  1. Enable the Composable LoRA extension.
  2. Add two LoRAs to the same prompt.
  3. Generate an image with txt2img.

What should have happened?

The two LoRAs should be applied to the same image.

Version or Commit where the problem happens

9fcdca36ae9e4f5b17d5222e990e335827a707ea

What Python version are you running on ?

None

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

AMD GPUs

Cross attention optimization

Doggettx

What browsers do you use to access the UI ?

No response

Command Line Arguments

--medvram --always-batch-cond-uncond --precision full --no-half --no-half-vae --upcast-sampling --opt-sub-quad-attention --opt-split-attention --opt-split-attention-v1 --disable-nan-check --sub-quad-q-chunk-size 512 --sub-quad-kv-chunk-size 512 --sub-quad-chunk-threshold 80 --disable-nan-check --upcast-sampling --use-cpu interrogate gfpgan scunet codeformer

List of extensions

Extensions": [ { "name": "canvas-zoom", "path": "C:\Users\1\stable-diffusion-webui-directml\extensions\canvas-zoom", "version": "1bfb259f", "branch": "main", "remote": "https://github.com/richrobber2/canvas-zoom" }, { "name": "posex", "path": "C:\Users\1\stable-diffusion-webui-directml\extensions\posex", "version": "47ed0c4d", "branch": "master", "remote": "https://github.com/daswer123/posex" }, { "name": "sd-webui-controlnet", "path": "C:\Users\1\stable-diffusion-webui-directml\extensions\sd-webui-controlnet", "version": "eacfe995", "branch": "main", "remote": "https://github.com/Mikubill/sd-webui-controlnet.git" }, { "name": "sd-webui-loractl", "path": "C:\Users\1\stable-diffusion-webui-directml\extensions\sd-webui-loractl", "version": "fdaed0fe", "branch": "master", "remote": "https://github.com/cheald/sd-webui-loractl.git" }, { "name": "stable-diffusion-webui-composable-lora", "path": "C:\Users\1\stable-diffusion-webui-directml\extensions\stable-diffusion-webui-composable-lora", "version": "a03d40eb", "branch": "main", "remote": "https://github.com/a2569875/stable-diffusion-webui-composable-lora.git" }, { "name": "stable-diffusion-webui-images-browser", "path": "C:\Users\1\stable-diffusion-webui-directml\extensions\stable-diffusion-webui-images-browser", "version": "a42c7a30", "branch": "main", "remote": "https://github.com/yfszzx/stable-diffusion-webui-images-browser" }, { "name": "stable-diffusion-webui-two-shot", "path": "C:\Users\1\stable-diffusion-webui-directml\extensions\stable-diffusion-webui-two-shot", "version": "6b55dd52", "branch": "main", "remote": "https://github.com/ashen-sensored/stable-diffusion-webui-two-shot" }, { "name": "stable-diffusion-webui-wd14-tagger", "path": "C:\Users\1\stable-diffusion-webui-directml\extensions\stable-diffusion-webui-wd14-tagger", "version": "99bf7d81", "branch": "master", "remote": "https://github.com/toriato/stable-diffusion-webui-wd14-tagger" }, { "name": "ultimate-upscale-for-automatic1111", "path": "C:\Users\1\stable-diffusion-webui-directml\extensions\ultimate-upscale-for-automatic1111", "version": "c99f382b", "branch": "master", "remote": "https://github.com/Coyote-A/ultimate-upscale-for-automatic1111" }

Console logs

Already up to date.
venv "C:\Users\1\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.2
Commit hash: 9fcdca36ae9e4f5b17d5222e990e335827a707ea

Launching Web UI with arguments: --medvram --always-batch-cond-uncond --precision full --no-half --no-half-vae --upcast-sampling --opt-sub-quad-attention --opt-split-attention --opt-split-attention-v1 --disable-nan-check --sub-quad-q-chunk-size 512 --sub-quad-kv-chunk-size 512 --sub-quad-chunk-threshold 80 --disable-nan-check --upcast-sampling --use-cpu interrogate gfpgan scunet codeformer
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
2023-08-27 22:18:10,026 - ControlNet - INFO - ControlNet v1.1.312
ControlNet preprocessor location: C:\Users\1\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads
2023-08-27 22:18:10,299 - ControlNet - INFO - ControlNet v1.1.312
Loading weights [7f96a1a9ca] from C:\Users\1\stable-diffusion-webui-directml\models\Stable-diffusion\AnythingV5_v5PrtRE.safetensors
Creating model from config: C:\Users\1\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 29.4s (launcher: 9.0s, import torch: 6.4s, import gradio: 2.7s, setup paths: 1.3s, other imports: 3.1s, list SD models: 0.3s, load scripts: 4.1s, create ui: 1.8s, gradio launch: 0.6s).
DiffusionWrapper has 859.52 M params.
Applying attention optimization: Doggettx... done.
Model loaded in 7.9s (load weights from disk: 2.3s, create model: 2.5s, apply weights to model: 2.2s, calculate empty prompt: 0.9s).
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:32<00:00,  1.61s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:27<00:00,  1.37s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [01:04<00:00,  3.20s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [01:00<00:00,  3.05s/it]
Composable LoRA load successful.███████████████████████████████████████████████████████| 20/20 [01:00<00:00,  3.03s/it]
  0%|                                                                                           | 0/20 [00:03<?, ?it/s]
*** Error completing request
*** Arguments: █████████████████████████████████████████████████████████████████████████████████, '(worst quality, low quality:1.4), monochrome, zombie, (interlocked fingers:1.2)', [], 20, 0, False, False, 1, 1, 7, 2097739035.0, -1.0, 0, 0, 0, False, 480, 720, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', ['Clip skip: 2'], <gradio.routes.Request object at 0x000001C56C54BC40>, 0, False, '', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001C56C538E80>, True, False, True, False, False, False, False, False, False, True, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, 50) {}
    Traceback (most recent call last):
      File "C:\Users\1\stable-diffusion-webui-directml\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\1\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\modules\txt2img.py", line 69, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\1\stable-diffusion-webui-directml\modules\processing.py", line 680, in process_images
        res = process_images_inner(p)
      File "C:\Users\1\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\modules\processing.py", line 797, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\1\stable-diffusion-webui-directml\modules\processing.py", line 1057, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Users\1\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 464, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
      File "C:\Users\1\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 303, in launch_sampling
        return func()
      File "C:\Users\1\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 464, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
      File "C:\Users\1\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Users\1\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 183, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\Users\1\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Users\1\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "C:\Users\1\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
        result = forward_call(*args, **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "C:\Users\1\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\modules\sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "C:\Users\1\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "C:\Users\1\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 329, in forward
        x = self.proj_in(x)
      File "C:\Users\1\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\extensions\stable-diffusion-webui-composable-lora\composable_lora.py", line 510, in lora_Conv2d_forward
        res = lora_forward(self, input, res)
      File "C:\Users\1\stable-diffusion-webui-directml\extensions\stable-diffusion-webui-composable-lora\composable_lora.py", line 77, in lora_forward
        patch = composable_lycoris.get_lora_patch(module, input, res, lora_layer_name)
      File "C:\Users\1\stable-diffusion-webui-directml\extensions\stable-diffusion-webui-composable-lora\composable_lycoris.py", line 114, in get_lora_patch
        return get_lora_inference(converted_module, input)
      File "C:\Users\1\stable-diffusion-webui-directml\extensions\stable-diffusion-webui-composable-lora\composable_lycoris.py", line 78, in get_lora_inference
        return module.inference(input)
      File "C:\Users\1\stable-diffusion-webui-directml\extensions\stable-diffusion-webui-composable-lora\composable_lycoris.py", line 237, in inference
        return self.up_model(self.down_model(x))
      File "C:\Users\1\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\1\stable-diffusion-webui-directml\extensions\stable-diffusion-webui-composable-lora\composable_lora.py", line 509, in lora_Conv2d_forward
        res = torch.nn.Conv2d_forward_before_lora(self, input)
      File "C:\Users\1\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "C:\Users\1\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
      File "C:\Users\1\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 39, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: forward(op, args, kwargs))
      File "C:\Users\1\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 13, in forward
        return op(*args, **kwargs)
    RuntimeError: tensor.device().type() == at::DeviceType::PrivateUse1 INTERNAL ASSERT FAILED at "D:\\a\\_work\\1\\s\\pytorch-directml-plugin\\torch_directml\\csrc\\dml\\DMLTensor.cpp":31, please report a bug to PyTorch. unbox expects Dml at::Tensor as inputs
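The last extension frames before the failure are composable_lora.py's Conv2d hook and composable_lycoris.py line 237 (return self.up_model(self.down_model(x))), which suggests the UNet activation and the converted LoRA up/down modules end up on different devices under the DirectML backend. The sketch below is a hypothetical workaround under that assumption, not the extension's actual fix: move the LoRA modules onto the activation's device before running them (the attribute names follow the traceback).

```python
# Hypothetical workaround sketch, not the extension's actual code: keep the
# LoRA up/down Conv2d modules on the same device as the incoming activation
# so conv2d never mixes CPU and DirectML ("privateuseone") tensors.
import torch

class LoraModuleSketch:
    """Stand-in for the extension's converted LoRA module (assumed shape)."""

    def __init__(self, down_model: torch.nn.Conv2d, up_model: torch.nn.Conv2d):
        self.down_model = down_model
        self.up_model = up_model

    def inference(self, x: torch.Tensor) -> torch.Tensor:
        # Moving the modules onto x.device avoids the DMLTensor.cpp unbox
        # assert seen above when the weights were left on the CPU.
        if self.up_model.weight.device != x.device:
            self.down_model.to(x.device)
            self.up_model.to(x.device)
        return self.up_model(self.down_model(x))
```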

---

Additional information

No response