adieyal / sd-dynamic-prompts

A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
MIT License

Magic Prompts isn't working with DirectML #693

Closed benjylkl closed 3 months ago

benjylkl commented 7 months ago
*** Error running process: C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\extensions\sd-dynamic-prompts\scripts\dynamic_prompting.py
    Traceback (most recent call last):
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\modules\scripts.py", line 619, in process
        script.process(p, *script_args)
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\extensions\sd-dynamic-prompts\sd_dynamic_prompts\dynamic_prompting.py", line 485, in process
        all_prompts, all_negative_prompts = generate_prompts(
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\extensions\sd-dynamic-prompts\sd_dynamic_prompts\helpers.py", line 93, in generate_prompts
        all_prompts = prompt_generator.generate(prompt, num_prompts, seeds=seeds) or [""]
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\dynamicprompts\generators\magicprompt.py", line 164, in generate
        magic_prompts = self._generate_magic_prompts(prompts)
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\dynamicprompts\generators\magicprompt.py", line 210, in _generate_magic_prompts
        prompts = self._generator(
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\transformers\pipelines\text_generation.py", line 201, in __call__
        return super().__call__(text_inputs, **kwargs)
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\transformers\pipelines\base.py", line 1101, in __call__
        outputs = list(final_iterator)
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\transformers\pipelines\pt_utils.py", line 124, in __next__
        item = next(self.iterator)
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\transformers\pipelines\pt_utils.py", line 125, in __next__
        processed = self.infer(item, **self.params)
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\transformers\pipelines\base.py", line 1026, in forward
        model_outputs = self._forward(model_inputs, **forward_params)
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\transformers\pipelines\text_generation.py", line 263, in _forward
        generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\transformers\generation\utils.py", line 1572, in generate
        return self.sample(
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\transformers\generation\utils.py", line 2619, in sample
        outputs = self(
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 1080, in forward
        transformer_outputs = self.transformer(
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 903, in forward
        outputs = block(
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 391, in forward
        attn_outputs = self.attn(
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 332, in forward
        attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
      File "C:\StableDiffusionStabilityMatrix-win-x64\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion Web UI\venv\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 198, in _attn
        causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length]
    RuntimeError: Cannot set version_counter for inference tensor
benjylkl commented 7 months ago

I am thinking this is the same problem as this one: https://github.com/pytorch/pytorch/pull/95748
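The linked PR concerns PyTorch's inference mode. As a hedged illustration of the underlying mechanism (a minimal sketch assuming a recent stock PyTorch build, not specific to DirectML): tensors created under `torch.inference_mode()` skip version-counter tracking, so autograd-aware code that later touches them can raise errors like the one in this traceback. Cloning outside inference mode is the usual escape hatch:

```python
import torch

# Tensors created under torch.inference_mode() are "inference tensors":
# PyTorch skips version-counter and view tracking for them, so code that
# later tries to update the version counter can fail with errors like
# "RuntimeError: Cannot set version_counter for inference tensor".
with torch.inference_mode():
    t = torch.ones(3)

assert t.is_inference()  # True: t is an inference tensor

# Cloning outside inference mode produces a normal tensor again, which is
# the usual workaround when an inference tensor leaks into autograd-aware
# code.
u = t.clone()
assert not u.is_inference()  # False: u is an ordinary tensor
```

Whether the DirectML backend can be patched to do an equivalent clone (or to use `torch.no_grad()` instead of inference mode) is an open question here.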

akx commented 7 months ago

Are you using e.g. DirectML or something otherwise exotic?

benjylkl commented 7 months ago

> Are you using e.g. DirectML or something otherwise exotic?

Yes, I am using DirectML. I think that may be the reason?

subhead commented 6 months ago

I have the same problem using DirectML. Is there a way to use dynamic-prompts with DirectML?

subhead commented 6 months ago

I noticed that the error only appears if I tick the "Magic prompt" checkbox. If I only activate the "Dynamic prompt enabled" checkbox, the error does not appear.

benjylkl commented 6 months ago

> I noticed that the error only appears if I tick the "Magic prompt" checkbox. If I only activate the "Dynamic prompt enabled" checkbox, the error does not appear.

I have the exact same behaviour.

akx commented 6 months ago

I would then say Magic Prompts is not compatible with DirectML. I don't have a DirectML machine, so I can't help more.

Workaround: don't use Magic Prompts with DirectML.

Drael64 commented 5 months ago

> I have the same problem using DirectML. Is there a way to use dynamic-prompts with DirectML?

My experience is that wildcards work some of the time and not others. It's odd behavior. The actual dynamic prompt format works all of the time for me; it's just wildcards that are hit and miss.