AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm) #11080

Open yesbroc opened 1 year ago


Is there an existing issue for this?

What happened?

I'm running ControlNet on the CPU and everything else on the GPU (because --lowvram alone doesn't cut it for ControlNet). From what I understand, the WebUI can't handle tensors split across two devices. This happened in img2img, but it could presumably happen in txt2img too.
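For reference, the error in the issue title can be reproduced outside the WebUI with a few lines of plain PyTorch; this is only a sketch of the failure mode, not WebUI code. On a CUDA machine, passing a CPU tensor as `mat1` to `torch.addmm` while the other operands sit on `cuda:0` raises exactly this RuntimeError, and moving every operand onto one device fixes it:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

mat1 = torch.ones(2, 3)                  # deliberately left on the CPU
mat2 = torch.ones(3, 2, device=device)
bias = torch.zeros(2, 2, device=device)

if device == "cuda":
    try:
        # mat1 is on cpu while mat2/bias are on cuda:0 -> RuntimeError
        torch.addmm(bias, mat1, mat2)
    except RuntimeError as e:
        print(e)  # "Expected all tensors to be on the same device..."

# The fix: move every operand to the same device before the matmul.
out = torch.addmm(bias, mat1.to(device), mat2)
print(out.shape)  # torch.Size([2, 2])
```

In the WebUI's case, `--use-cpu controlnet` presumably keeps the ControlNet tensors on the CPU while the rest of the model runs on `cuda:0`, producing the same mismatch where the two meet.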

Steps to reproduce the problem

  1. Launch the WebUI with: --reinstall-xformers --xformers --lowvram --update-all-extensions --always-batch-cond-uncond --api --use-cpu controlnet –opt-split-attention-v1

  2. Open the img2img tab.

  3. Run a generation with a depth ControlNet unit enabled.

What should have happened?

It should have generated the image without errors.

Commit where the problem happens

https://github.com/AUTOMATIC1111/stable-diffusion-webui

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

CPU, Other GPUs

What browsers do you use to access the UI ?

Microsoft Edge

Command Line Arguments

--reinstall-xformers --xformers --lowvram --update-all-extensions --always-batch-cond-uncond --api --use-cpu controlnet –opt-split-attention-v1
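One detail worth noting (an editorial observation, not part of the original report): the last flag in this list starts with an en-dash (U+2013) rather than two ASCII hyphens, which is also what shows up as the `ΓÇô` mojibake in the launch log below. Argument parsers only recognize ASCII `-`/`--` prefixes, so the en-dash form would not be parsed as a flag. A quick illustrative check:

```python
# Illustrative check, not WebUI code: the en-dash "–" (U+2013) is a
# different character from the ASCII hyphen "-" (U+002D), so the
# en-dash form is not recognized as a "--" option by argument parsers.
endash_flag = "–opt-split-attention-v1"   # starts with U+2013
ascii_flag = "--opt-split-attention-v1"   # starts with two U+002D hyphens

print(endash_flag.startswith("--"))  # False
print(ascii_flag.startswith("--"))   # True
print(hex(ord(endash_flag[0])))      # 0x2013
```

If the flag was meant to take effect, retyping it as `--opt-split-attention-v1` (two ASCII hyphens) should make the parser accept it.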

List of extensions

(screenshot of the installed extensions list)

Console logs

venv "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.9 | packaged by Anaconda, Inc. | (main, Mar  1 2023, 18:18:15) [MSC v.1916 64 bit (AMD64)]
Version: v1.3.2
Commit hash: baf6946e06249c5af9851c60171692c44ef633e0
Installing xformers
Collecting xformers==0.0.17
  Using cached xformers-0.0.17-cp310-cp310-win_amd64.whl (112.6 MB)
Installing collected packages: xformers
Successfully installed xformers-0.0.17

[notice] A new release of pip available: 22.3.1 -> 23.1.2
[notice] To update, run: C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\Scripts\python.exe -m pip install --upgrade pip
Installing requirements

Fetching updates for midas...
Checking out commit for midas with hash: 1645b7e...

Installing requirements 1 for Infinite-Zoom

Installing sd-webui-infinite-image-browsing requirement: python-dotenv
Installing sd-webui-infinite-image-browsing requirement: Pillow

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\a1111-sd-webui-lycoris':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\a1111-stable-diffusion-webui-vram-estimator':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\clip-interrogator-ext':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\depth-image-io-for-SDWebui':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\depthmap2mask':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\infinite-zoom-automatic1111-webui':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\OneButtonPrompt':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\openOutpaint-webUI-extension':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\openOutpaint-webUI-extension\app':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\openpose-editor':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\PBRemTools':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\SD-CN-Animation':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-dynamic-thresholding':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-extension-steps-animation':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-3d-open-pose-editor':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-aspect-ratio-helper':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-infinite-image-browsing':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-llul':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-model-converter':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd_dreambooth_extension':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\stable-diffusion-webui-cafe-aesthetic':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\stable-diffusion-webui-distributed':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\stable-diffusion-webui-eyemask':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\stable-diffusion-webui-inspiration':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\stable-diffusion-webui-Prompt_Generator':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\stable-diffusion-webui-rembg':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\stable-diffusion-webui-state':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\stable-diffusion-webui-two-shot':
Already up to date.

Pulled changes for repository in 'C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111':
Already up to date.

Launching Web UI with arguments: --reinstall-xformers --xformers --lowvram --update-all-extensions --always-batch-cond-uncond --api --use-cpu interrogate, controlnet ΓÇôopt-split-attention-v1
2023-06-08 00:18:23,437 - ControlNet - INFO - ControlNet v1.1.219
ControlNet preprocessor location: C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-06-08 00:18:23,822 - ControlNet - INFO - ControlNet v1.1.219
Loading weights [06587e514e] from C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\models\Stable-diffusion\dreamshaper_6Inpainting.safetensors
[VRAMEstimator] Loaded benchmark data.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 31.9s (import torch: 5.9s, import gradio: 1.5s, import ldm: 0.6s, other imports: 2.1s, setup codeformer: 0.2s, load scripts: 17.9s, create ui: 3.1s, gradio launch: 0.3s, scripts app_started_callback: 0.1s).
Creating model from config: C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\configs\v1-inpainting-inference.yaml
LatentInpaintDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.54 M params.
Loading VAE weights specified in settings: C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt
Applying optimization: xformers... done.
Textual inversion embeddings loaded(8): (nsfw embeddings), EasyNegative
Textual inversion embeddings skipped(3): CGI_Animation, midjourney, negmutation-200
Model loaded in 20.6s (load weights from disk: 9.8s, create model: 1.2s, apply weights to model: 3.0s, apply half(): 2.9s, load VAE: 3.4s, load textual inversion embeddings: 0.2s).
Traceback (most recent call last):
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1321, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1171, in preprocess_data
    processed_input.append(block.preprocess(inputs[i]))
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\utils.py", line 109, in svg_preprocess
    return preprocess(inputs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\gradio\components.py", line 1826, in preprocess
    im = processing_utils.decode_base64_to_image(x)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\gradio\processing_utils.py", line 53, in decode_base64_to_image
    exif = img.getexif()
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\PIL\PngImagePlugin.py", line 1028, in getexif
    return super().getexif()
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\PIL\Image.py", line 1455, in getexif
    self._exif.load(exif_info)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\PIL\Image.py", line 3719, in load
    self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\PIL\TiffImagePlugin.py", line 507, in __init__
    raise SyntaxError(msg)
SyntaxError: not a TIFF file (header b"b'Exif\\x" not valid)
[The same "SyntaxError: not a TIFF file" traceback repeats seven more times.]
Error completing request
Arguments: ('', [], True, -1) {}
Traceback (most recent call last):
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\ui_common.py", line 46, in save_files
    data = json.loads(js_data)
  File "C:\Users\orijp\anaconda3\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\orijp\anaconda3\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\orijp\anaconda3\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

2023-06-08 00:20:16,134 - ControlNet - INFO - Loading model: control_v11f1p_sd15_depth [cfd03158]
2023-06-08 00:20:18,754 - ControlNet - INFO - Loaded state_dict from [C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\models\ControlNet\control_v11f1p_sd15_depth.pth]
2023-06-08 00:20:18,755 - ControlNet - INFO - Loading config: C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\models\control_v11f1p_sd15_depth.yaml
2023-06-08 00:20:22,038 - ControlNet - INFO - ControlNet model control_v11f1p_sd15_depth [cfd03158] loaded.
2023-06-08 00:20:22,169 - ControlNet - INFO - Loading preprocessor: depth
2023-06-08 00:20:22,169 - ControlNet - INFO - preprocessor resolution = 512
  0%|                                                                                          | 0/19 [00:01<?, ?it/s]
Error completing request
Arguments: ('task(mvuj3jxliv4t4oo)', 0, 'robotic armour piece, chest, scifi', 'EasyNegative, worst quality, (low quality:1.5), medium quality, deleted, (lowres:1.2), (bad anatomy:1.4), (bad hands:1.3), text, error, missing fingers, extra digit, fewer digits, (cropped:1.2), jpeg artifacts, signature, (watermark:1.2), username, blurry, less than 5 fingers, more than 5 fingers, bad hands, bad hand anatomy, missing fingers, extra fingers, mutated hands, disfigured hands, deformed hands, (double eyebrows:1.3), deformed lips, bad teeth, deformed teeth, (multiple tails:1.1), naked, nsfw, framing error, (bad framing:1.3), (disfigured teeth:1.6), (ugly teeth:1.4), clothes', [], <PIL.Image.Image image mode=RGBA size=1508x2676 at 0x267F83723B0>, None, None, None, None, None, None, 25, 16, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.73, -1.0, -1.0, 0, 0, 0, False, 0, 768, 512, 1, 1, 0, 32, 0, '', '', '', [], 0, '\n    <div style="padding: 10px">\n      <div>Estimated VRAM usage: <span style="color: rgb(255.00, 0.01, 197.40)">4008.47 MB / 4096 MB (97.86%)</span></div>\n      <div>(1546 MB system + 2238.61 MB used)</div>\n    </div>\n    ', False, 7, 100, 'Constant', 0, 'Constant', 0, 4, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x00000266CCAD8FD0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002698AAA22F0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002698AAA2290>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x00000267F8372C20>, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Bilinear', False, 'Lerp', '', '', False, False, None, True, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 
80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, True, None, None, False, False, 0, True, 384, 384, False, 4, True, True, False, False, None, None, False, None, None, False, None, None, False, None, None, False, 50, True, 0, '', '', 20, True, 20, True, 4, 0.4, 7, 512, 512, True, 88, False, 'None', '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\img2img.py", line 178, in img2img
    processed = process_images(p)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\processing.py", line 610, in process_images
    res = process_images_inner(p)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\processing.py", line 728, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 295, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\processing.py", line 1261, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 356, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
    return func()
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 356, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 156, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1339, in forward
    out = self.diffusion_model(xc, t, context=cc)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 630, in forward_webui
    return forward(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 414, in forward
    control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 99, in forward
    return self.control_model(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 351, in forward
    emb = self.time_embed(t_emb)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 400, in lora_Linear_forward
    return torch.nn.Linear_forward_before_lora(self, input)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\a1111-sd-webui-lycoris\lycoris.py", line 741, in lyco_Linear_forward
    return torch.nn.Linear_forward_before_lyco(self, input)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)

2023-06-08 00:21:34,472 - ControlNet - INFO - Preview Resolution = 512
2023-06-08 00:22:27,710 - ControlNet - INFO - Preview Resolution = 512
2023-06-08 00:23:32,639 - ControlNet - INFO - Loading model from cache: control_v11f1p_sd15_depth [cfd03158]
2023-06-08 00:23:32,641 - ControlNet - INFO - Loading preprocessor: none
2023-06-08 00:23:32,641 - ControlNet - INFO - preprocessor resolution = 512
  0%|                                                                                          | 0/19 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(b2t1xc5vs4gqh5c)', 0, 'robotic armour piece, chest, scifi', 'EasyNegative, worst quality, (low quality:1.5), medium quality, deleted, (lowres:1.2), (bad anatomy:1.4), (bad hands:1.3), text, error, missing fingers, extra digit, fewer digits, (cropped:1.2), jpeg artifacts, signature, (watermark:1.2), username, blurry, less than 5 fingers, more than 5 fingers, bad hands, bad hand anatomy, missing fingers, extra fingers, mutated hands, disfigured hands, deformed hands, (double eyebrows:1.3), deformed lips, bad teeth, deformed teeth, (multiple tails:1.1), naked, nsfw, framing error, (bad framing:1.3), (disfigured teeth:1.6), (ugly teeth:1.4), clothes', [], <PIL.Image.Image image mode=RGBA size=1508x2676 at 0x267E6CD5420>, None, None, None, None, None, None, 25, 16, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.73, -1.0, -1.0, 0, 0, 0, False, 0, 768, 512, 1, 1, 0, 32, 0, '', '', '', [], 0, '\n    <div style="padding: 10px">\n      <div>Estimated VRAM usage: <span style="color: rgb(255.00, 0.01, 197.40)">4008.47 MB / 4096 MB (97.86%)</span></div>\n      <div>(1546 MB system + 2238.61 MB used)</div>\n    </div>\n    ', False, 7, 100, 'Constant', 0, 'Constant', 0, 4, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x00000266CCA95900>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002698AAA1240>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002698AAA0430>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002698AAA0460>, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Bilinear', False, 'Lerp', '', '', False, False, None, True, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 
80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, True, None, None, False, False, 0, True, 384, 384, False, 4, True, True, False, False, None, None, False, None, None, False, None, None, False, None, None, False, 50, True, 0, '', '', 20, True, 20, True, 4, 0.4, 7, 512, 512, True, 88, False, 'None', '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\img2img.py", line 178, in img2img
    processed = process_images(p)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\processing.py", line 610, in process_images
    res = process_images_inner(p)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\processing.py", line 728, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 295, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\processing.py", line 1261, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 356, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
    return func()
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 356, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 156, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1339, in forward
    out = self.diffusion_model(xc, t, context=cc)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 630, in forward_webui
    return forward(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 414, in forward
    control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 99, in forward
    return self.control_model(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 351, in forward
    emb = self.time_embed(t_emb)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 400, in lora_Linear_forward
    return torch.nn.Linear_forward_before_lora(self, input)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\extensions\a1111-sd-webui-lycoris\lycoris.py", line 741, in lyco_Linear_forward
    return torch.nn.Linear_forward_before_lyco(self, input)
  File "C:\Users\orijp\OneDrive\Desktop\chatgpts\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)

2023-06-08 00:24:53,406 - ControlNet - INFO - Loading model from cache: control_v11f1p_sd15_depth [cfd03158]
2023-06-08 00:24:53,415 - ControlNet - INFO - Loading preprocessor: depth_leres++
2023-06-08 00:24:53,415 - ControlNet - INFO - preprocessor resolution = 512
  0%|                                                                                          | 0/19 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(8o9vvw8dra4rzll)', 0, 'robotic armour piece, chest, scifi', 'EasyNegative, worst quality, (low quality:1.5), medium quality, deleted, (lowres:1.2), (bad anatomy:1.4), (bad hands:1.3), text, error, missing fingers, extra digit, fewer digits, (cropped:1.2), jpeg artifacts, signature, (watermark:1.2), username, blurry, less than 5 fingers, more than 5 fingers, bad hands, bad hand anatomy, missing fingers, extra fingers, mutated hands, disfigured hands, deformed hands, (double eyebrows:1.3), deformed lips, bad teeth, deformed teeth, (multiple tails:1.1), naked, nsfw, framing error, (bad framing:1.3), (disfigured teeth:1.6), (ugly teeth:1.4), clothes', [], <PIL.Image.Image image mode=RGBA size=1508x2676 at 0x268704981C0>, None, None, None, None, None, None, 25, 16, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.73, -1.0, -1.0, 0, 0, 0, False, 0, 768, 512, 1, 1, 0, 32, 0, '', '', '', [], 0, '\n    <div style="padding: 10px">\n      <div>Estimated VRAM usage: <span style="color: rgb(255.00, 0.01, 197.40)">4008.47 MB / 4096 MB (97.86%)</span></div>\n      <div>(1546 MB system + 2238.61 MB used)</div>\n    </div>\n    ', False, 7, 100, 'Constant', 0, 'Constant', 0, 4, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x00000268704986A0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000026870499870>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002687049AB00>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000026870499E10>, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Bilinear', False, 'Lerp', '', '', False, False, None, True, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 
80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, True, None, None, False, False, 0, True, 384, 384, False, 4, True, True, False, False, None, None, False, None, None, False, None, None, False, None, None, False, 50, True, 0, '', '', 20, True, 20, True, 4, 0.4, 7, 512, 512, True, 88, False, 'None', '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
  [... identical to the traceback above ...]
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)

Additional information

No response

pakresi commented 1 year ago

This is definitely related to v1.3.2; v1.3.0 does not have the same problem.

@AUTOMATIC1111 said v1.3.2 would "fix postprocessing overwriting parameters".

I had the same issue. It took me a few hours to understand this error: SyntaxError: not a TIFF file (header b"b'Exif\x" not valid)

For example: in the Extras tab, if your script creates a new image and uploads it to the PNG Info tab, you will see the same error.

It must be related to this change: https://github.com/AUTOMATIC1111/stable-diffusion-webui/compare/v1.3.0...v1.3.2#diff-7035c0b11c034183a4d5570fccaf498d0e1ceea6f1a70efffab2bf963703739a (updates to modules/images.py); Gradio has not adapted to them and fails to detect the PNG image.

I solved my problem in the Extras tab by copying the original image info onto the returned image:

prev_image_info = pp.image.info
...
# (my code creates newimage)
...
pp.image = newimage
pp.image.info = prev_image_info  # without this line you get the 'header b"b\'Exif\\x" not valid' error
return pp.image

An easier solution: use v1.3.0.
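The workaround above can be sketched as a small helper. This is a hypothetical function (not part of the webui API), using Pillow, where `old_image` and `new_image` play the roles of `pp.image` and `newimage` in the snippet:

```python
from PIL import Image


def replace_image_keep_info(old_image: Image.Image, new_image: Image.Image) -> Image.Image:
    # Carry the original metadata dict (EXIF bytes, PNG text chunks such as
    # "parameters") over to the newly created image, so that downstream
    # PNG Info parsing does not fail on missing or malformed headers.
    new_image.info = dict(old_image.info)
    return new_image
```

In a postprocessing script, this would be called on the freshly created image just before assigning it back to `pp.image` and returning.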

yesbroc commented 1 year ago

I got around the TIFF issue by using a duplicate backup photo. What I'm really asking is why using the CPU for ControlNet and CUDA for Stable Diffusion won't work.

eek168 commented 10 months ago

I was getting this issue too after using --medvram-sdxl. I was able to fix it by running once with --reinstall-torch (in addition to all my other arguments), then closing, removing --reinstall-torch, and rerunning without the error.

Vectorrent commented 8 months ago

I was running into the same error, while trying to use my CPU for both the model and ControlNet. I was able to fix it by preventing torch from seeing my GPU at all:

export CUDA_VISIBLE_DEVICES=""