lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Using token merge throws an error: RuntimeError: unknown error #486

Closed · Freda-Chan closed 21 hours ago

Freda-Chan commented 4 days ago


What happened?

The web UI cannot generate an image when the token merging optimization is enabled; generation fails with `RuntimeError: unknown error`.

Steps to reproduce the problem

Go to the Optimizations settings tab and increase the token merging ratio (see the sketch below for what this setting does under the hood).
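
For context, the webui's token merging setting wraps the tomesd library. A minimal standalone sketch of the equivalent patch follows; the `model` variable and the ratio value are illustrative assumptions, not the webui's actual code:

```python
# Sketch only: apply ToMe token merging to a loaded Stable Diffusion model.
# `model` is assumed to be an already-loaded ldm-style model object, and
# 0.5 is an example ratio, matching the webui's "token merging ratio" slider.
import tomesd

tomesd.apply_patch(model, ratio=0.5)  # merge roughly 50% of tokens in attention
# ... generate images as usual ...
tomesd.remove_patch(model)            # restore the original attention forward
```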

What should have happened?

Token merging should work without any issues.

What browsers do you use to access the UI?

Mozilla Firefox, Brave, Other

Sysinfo

sysinfo-2024-07-05-06-48.json

Console logs

venv "F:\sd\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.3-amd-28-g371f53ed
Commit hash: 371f53ed7c926f9048ef95f45bc816cfbf37b564
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --medvram --use-directml
F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\diffusers\models\vq_model.py:20: FutureWarning: `VQEncoderOutput` is deprecated and will be removed in version 0.31. Importing `VQEncoderOutput` from `diffusers.models.vq_model` is deprecated and this will be removed in a future version. Please use `from diffusers.models.autoencoders.vq_model import VQEncoderOutput`, instead.
  deprecate("VQEncoderOutput", "0.31", deprecation_message)
F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\diffusers\models\vq_model.py:25: FutureWarning: `VQModel` is deprecated and will be removed in version 0.31. Importing `VQModel` from `diffusers.models.vq_model` is deprecated and this will be removed in a future version. Please use `from diffusers.models.autoencoders.vq_model import VQModel`, instead.
  deprecate("VQModel", "0.31", deprecation_message)
ONNX: version=1.18.1 provider=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Loading weights [6ce0161689] from F:\sd\stable-diffusion-webui-amdgpu\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: F:\sd\stable-diffusion-webui-amdgpu\configs\v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Startup time: 20.2s (prepare environment: 28.3s, initialize shared: 2.7s, other imports: 0.1s, load scripts: 1.2s, create ui: 0.8s, gradio launch: 0.9s).
Applying attention optimization: Doggettx... done.
Model loaded in 15.8s (load weights from disk: 1.2s, create model: 1.9s, apply weights to model: 8.8s, apply half(): 3.5s, calculate empty prompt: 0.4s).
  0%|                                                                               | 0/20 [00:01<?, ?it/s]
*** Error completing request
*** Arguments: ('task(syv739hnfewvq91)', <gradio.routes.Request object at 0x000001C32C150A30>, 'A parrot', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "F:\sd\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "F:\sd\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "F:\sd\stable-diffusion-webui-amdgpu\modules\processing.py", line 847, in process_images
        res = process_images_inner(p)
      File "F:\sd\stable-diffusion-webui-amdgpu\modules\processing.py", line 1075, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "F:\sd\stable-diffusion-webui-amdgpu\modules\processing.py", line 1422, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "F:\sd\stable-diffusion-webui-amdgpu\modules\sd_samplers_kdiffusion.py", line 221, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\sd\stable-diffusion-webui-amdgpu\modules\sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "F:\sd\stable-diffusion-webui-amdgpu\modules\sd_samplers_kdiffusion.py", line 221, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\modules\sd_samplers_cfg_denoiser.py", line 243, in forward
        x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(subscript_cond(cond_in, a, b), image_cond_in[a:b]))
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 18, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "F:\sd\stable-diffusion-webui-amdgpu\modules\sd_hijack_utils.py", line 32, in __call__
        return self.__orig_func(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1582, in _call_impl
        result = forward_call(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1582, in _call_impl
        result = forward_call(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
        x = block(x, context=context[i])
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\sd\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
      File "F:\sd\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\torch\autograd\function.py", line 598, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "F:\sd\stable-diffusion-webui-amdgpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\tomesd\patch.py", line 59, in _forward
        m_a, m_c, m_m, u_a, u_c, u_m = compute_merge(x, self._tome_info)
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\tomesd\patch.py", line 31, in compute_merge
        m, u = merge.bipartite_soft_matching_random2d(x, w, h, args["sx"], args["sy"], r,
      File "F:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\tomesd\merge.py", line 97, in bipartite_soft_matching_random2d
        dst_idx = gather(node_idx[..., None], dim=-2, index=src_idx)
    RuntimeError: unknown error
```

---

Additional information

No response

lshqqytiger commented 3 days ago

DirectML does not support `gather`. It also does not support `scatter` with partially modified dimensions. Please use ZLUDA if you are using a Navi card.
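
For anyone who wants to confirm this outside the webui, here is a minimal sketch of the failing operation under torch-directml. The tensor shapes are illustrative assumptions; the gather call mirrors the last frame of the traceback (`tomesd/merge.py`):

```python
# Sketch only: reproduce the op the traceback ends on, a torch.gather along
# dim=-2 as used by tomesd.merge.bipartite_soft_matching_random2d.
# Assumes the torch-directml package is installed; shapes are illustrative.
import torch
import torch_directml

dml = torch_directml.device()

node_idx = torch.randint(0, 16, (1, 16, 1), device=dml)  # candidate token indices
src_idx = torch.randint(0, 16, (1, 4, 1), device=dml)    # tokens chosen to merge

# On a DirectML device this gather can fail with "RuntimeError: unknown error";
# on CPU or CUDA it simply returns the selected rows.
dst_idx = torch.gather(node_idx, dim=-2, index=src_idx)
print(dst_idx.shape)  # expected on CPU/CUDA: torch.Size([1, 4, 1])
```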