Is there an existing issue for this?
[X] I have searched the existing issues and checked the recent builds/commits
What happened?
Interrogate CLIP starts processing, but fails at the end with a RuntimeError (see console logs below).
Steps to reproduce the problem
1. Go to img2img
2. Press Interrogate CLIP
What should have happened?
Interrogate CLIP should have completed and produced a caption for the image.
Version or Commit where the problem happens
Version: 1.4.1
What Python version are you running on ?
Python 3.10.x
What platforms do you use to access the UI ?
Windows
What device are you running WebUI on?
Other GPUs
Cross attention optimization
Automatic
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --medvram --precision full --no-half --no-half-vae --opt-sub-quad-attention --opt-split-attention-v1 --disable-nan-check
git pull
call webui.bat
List of extensions
None
Console logs
Already up to date.
venv "N:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.4.1
Commit hash: 097f9a096e84dd15cf75fdc0e8f0c763df256956
Installing requirements
Launching Web UI with arguments: --medvram --precision full --no-half --no-half-vae --opt-sub-quad-attention --opt-split-attention-v1 --disable-nan-check
No module 'xformers'. Proceeding without it.
Warning: caught exception '', memory monitor disabled
*** "Disable all extensions" option was set, will only load built-in extensions ***
Loading weights [ab0de91e50] from N:\stable-diffusion-webui-directml\models\Stable-diffusion\Realistic sd15\Realistic_Vision\Realistic_Vision_V4.0.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 6.5s (import torch: 1.7s, import gradio: 1.1s, import ldm: 0.4s, other imports: 1.3s, opts onchange: 0.5s, load scripts: 0.9s, create ui: 0.5s, gradio launch: 0.2s).
Creating model from config: N:\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: N:\stable-diffusion-webui-directml\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: sub-quadratic... done.
Textual inversion embeddings loaded(6): bad_prompt_version2-neg, kkw-bp, kkw-Demons, kkw-micro, ng_deepnegative_v1_75t, rmadanegative4_sd15-neg
preload_extensions_git_metadata for 25 extensions took 2.46s
Model loaded in 3.0s (load weights from disk: 0.7s, create model: 0.7s, apply weights to model: 1.1s, load VAE: 0.2s, calculate empty prompt: 0.4s).
load checkpoint from N:\stable-diffusion-webui-directml\models\BLIP\model_base_caption_capfilt_large.pth
*** Error interrogating
Traceback (most recent call last):
  File "N:\stable-diffusion-webui-directml\modules\interrogate.py", line 196, in interrogate
    caption = self.generate_caption(pil_image)
  File "N:\stable-diffusion-webui-directml\modules\interrogate.py", line 181, in generate_caption
    caption = self.blip_model.generate(gpu_image, sample=False, num_beams=shared.opts.interrogate_clip_num_beams, min_length=shared.opts.interrogate_clip_min_length, max_length=shared.opts.interrogate_clip_max_length)
  File "N:\stable-diffusion-webui-directml\repositories\BLIP\models\blip.py", line 156, in generate
    outputs = self.text_decoder.generate(input_ids=input_ids,
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\generation\utils.py", line 1518, in generate
    return self.greedy_search(
  File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\generation\utils.py", line 2267, in greedy_search
    unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
RuntimeError: new(): expected key in DispatchKeySet(CPU, CUDA, HIP, XLA, MPS, IPU, XPU, HPU, Lazy, Meta) but got: PrivateUse1
---
Should be fixed in 8ea37a924696435e1d777a2d5ab2909d64ce8975.
I have heard that PyTorch has already fixed this error upstream, but it still occurs in the latest release.
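For context: the failing line in transformers' `greedy_search` uses the legacy `Tensor.new()` factory method, which is not implemented for DirectML's `PrivateUse1` dispatch key. Below is a minimal sketch of the kind of workaround a patch could apply; the helper name is my own illustration, not taken from the linked commit, and this is not necessarily how the actual fix works:

```python
import torch

def make_unfinished_sequences(input_ids: torch.Tensor) -> torch.Tensor:
    """Hypothetical replacement for the line that raises in greedy_search:

        unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)

    torch.ones() is a modern factory function dispatched through the
    regular backend machinery, so it also works on private backends
    such as DirectML (PrivateUse1)."""
    return torch.ones(input_ids.shape[0],
                      dtype=input_ids.dtype,
                      device=input_ids.device)
```

The general pattern is to replace legacy `tensor.new(...)` calls with explicit factory functions (`torch.ones`, `torch.zeros`, `torch.full`) that take `dtype=` and `device=` arguments.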
Additional information
GPU: Intel Arc A770; CPU: AMD Ryzen 7 3700X
Thank you for your work.