adieyal / sd-dynamic-prompts

A custom script for AUTOMATIC1111/stable-diffusion-webui to implement a tiny template language for random prompt generation
MIT License

RuntimeError: Double (Float64)... Magic Prompt on webui DirectML #576

Open StudioDUzes opened 1 year ago

StudioDUzes commented 1 year ago

Already up to date.
venv "N:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.1
Commit hash: 2c2ca1170bcb7bbd12eef4551b8a42ab16dbe5f7

Launching Web UI with arguments: --medvram --no-half --no-half-vae --precision full --opt-sub-quad-attention --opt-split-attention-v1 --disable-nan-check
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception '', memory monitor disabled
Loading weights [d319cb2188] from N:\stable-diffusion-webui-directml\models\Stable-diffusion\02-Semi-Realistic-sd15\babes_20.safetensors
Creating model from config: N:\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 8.4s (launcher: 0.5s, import torch: 3.0s, import gradio: 1.1s, setup paths: 0.5s, other imports: 1.1s, opts onchange: 0.3s, load scripts: 0.9s, create ui: 0.6s, gradio launch: 0.2s).
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: N:\stable-diffusion-webui-directml\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: sub-quadratic... done.
Model loaded in 2.2s (load weights from disk: 0.7s, create model: 0.4s, apply weights to model: 0.5s, load VAE: 0.2s, calculate empty prompt: 0.4s).
First load of MagicPrompt may take a while.
Error running process: N:\stable-diffusion-webui-directml\extensions\sd-dynamic-prompts\scripts\dynamic_prompting.py
Traceback (most recent call last):
  File "N:\stable-diffusion-webui-directml\modules\scripts.py", line 519, in process
    script.process(p, *script_args)
  File "N:\stable-diffusion-webui-directml\extensions\sd-dynamic-prompts\sd_dynamic_prompts\dynamic_prompting.py", line 482, in process
    all_prompts, all_negative_prompts = generate_prompts(
  File "N:\stable-diffusion-webui-directml\extensions\sd-dynamic-prompts\sd_dynamic_prompts\helpers.py", line 93, in generate_prompts
    all_prompts = prompt_generator.generate(prompt, num_prompts, seeds=seeds) or [""]
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\dynamicprompts\generators\magicprompt.py", line 164, in generate
    magic_prompts = self._generate_magic_prompts(prompts)
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\dynamicprompts\generators\magicprompt.py", line 210, in _generate_magic_prompts
    prompts = self._generator(
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\pipelines\text_generation.py", line 202, in __call__
    return super().__call__(text_inputs, **kwargs)
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\pipelines\base.py", line 1063, in __call__
    outputs = [output for output in final_iterator]
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\pipelines\base.py", line 1063, in <listcomp>
    outputs = [output for output in final_iterator]
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\pipelines\pt_utils.py", line 124, in __next__
    item = next(self.iterator)
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\pipelines\pt_utils.py", line 125, in __next__
    processed = self.infer(item, **self.params)
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\pipelines\base.py", line 990, in forward
    model_outputs = self._forward(model_inputs, **forward_params)
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\pipelines\text_generation.py", line 244, in _forward
    generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\generation\utils.py", line 1571, in generate
    return self.sample(
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\generation\utils.py", line 2534, in sample
    outputs = self(
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 1046, in forward
    transformer_outputs = self.transformer(
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 812, in forward
    attention_mask = (1.0 - attention_mask) * torch.finfo(self.dtype).min
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\_tensor.py", line 40, in wrapped
    return f(*args, **kwargs)
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\_tensor.py", line 848, in __rsub__
    return _C._VariableFunctions.rsub(self, other)
RuntimeError: The GPU device does not support Double (Float64) operations!


100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:24<00:00, 1.25s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:18<00:00, 1.10it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:18<00:00, 1.31it/s]
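
For context, the crash happens at the additive attention-mask step in transformers' GPT-2 (modeling_gpt2.py, line 812 in the traceback): when the mask tensor is float64, the `1.0 - attention_mask` subtraction dispatches a Double op that the DirectML device rejects. A minimal sketch of the arithmetic involved (the shape and dtype below are illustrative assumptions, run on CPU where Double is supported):

```python
import torch

# GPT-2 turns the padding mask into an additive bias roughly like this:
#   attention_mask = (1.0 - attention_mask) * torch.finfo(self.dtype).min
# On CPU this works even in float64; on a DirectML device, the __rsub__ in
# (1.0 - mask) raises "The GPU device does not support Double (Float64)
# operations!" when the mask tensor is float64.
mask = torch.ones(1, 8, dtype=torch.float64)
bias = (1.0 - mask) * torch.finfo(torch.float64).min
print(bias.dtype)  # torch.float64 -- the dtype DirectML cannot handle
```

One possible workaround, sketched here as an assumption rather than an official fix: keep the MagicPrompt GPT-2 pipeline on the CPU so no double-precision ops ever reach the GPU. The model id below is the commonly used MagicPrompt checkpoint, assumed rather than confirmed from this log:

```python
from transformers import pipeline

# device=-1 keeps the model on CPU in transformers pipelines.
generator = pipeline(
    "text-generation",
    model="Gustavosta/MagicPrompt-Stable-Diffusion",
    device=-1,
)
print(generator("a portrait of", max_new_tokens=20)[0]["generated_text"])
```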

StudioDUzes commented 1 year ago

"I'm feeling lucky" work very well ???