StudioDUzes opened this issue 11 months ago
I can't reproduce the problem due to a lack of DirectML test environment. Fixing it may be challenging since segment-anything is an external package.
The Intel Arc A770 16GB does not support double (float64) operations... I'm not a programmer, but why use double (float64) rather than something compatible with more hardware? I ask this silly question because a lot of extensions have this problem.
https://github.com/adieyal/sd-dynamic-prompts/issues/576
Very good extension... it works with "--use-cpu all" or RunDiffusion.
Although I don't have a test environment for DirectML, I've implemented countermeasures in areas where variables appear to be using float64. Please update your repository to the latest version and give it a try.
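For context, countermeasures of this kind usually amount to downcasting anything that defaults to double precision before it reaches the DirectML device. A minimal sketch of the idea (the helper name is hypothetical, not the extension's actual code):

```python
import numpy as np

def to_float32(value):
    """Downcast float64 NumPy arrays to float32 for devices (such as
    DirectML GPUs) that do not support double-precision operations.
    Anything that is not a float64 array passes through unchanged."""
    if isinstance(value, np.ndarray) and value.dtype == np.float64:
        return value.astype(np.float32)
    return value
```

The same pattern applies to `torch.Tensor` values via `tensor.to(torch.float32)`; the tricky part is finding every place an external package like segment-anything silently creates float64 data.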
Already up to date.
venv "N:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.1
Commit hash: 2c2ca1170bcb7bbd12eef4551b8a42ab16dbe5f7
Launching Web UI with arguments: --medvram --no-half --no-half-vae --precision full --opt-sub-quad-attention --opt-split-attention-v1 --disable-nan-check
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception '', memory monitor disabled
Loading weights [e6415c4892] from N:\stable-diffusion-webui-directml\models\Stable-diffusion\00-RD\Realistic-sd15\Realistic_Vision_V2.0.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 9.2s (launcher: 0.5s, import torch: 2.9s, import gradio: 1.1s, setup paths: 0.5s, other imports: 1.1s, opts onchange: 0.3s, load scripts: 1.6s, create ui: 0.8s, gradio launch: 0.2s).
Creating model from config: N:\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: N:\stable-diffusion-webui-directml\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: sub-quadratic... done.
Model loaded in 2.5s (load weights from disk: 1.1s, create model: 0.4s, apply weights to model: 0.5s, load VAE: 0.1s, calculate empty prompt: 0.4s).
100%|███████████████████████████████████████████████████████████████████████████████| 358M/358M [00:04<00:00, 92.6MB/s]
2023-07-29 07:05:15,458 - Inpaint Anything - INFO - resize by padding: (512, 512) -> (512, 512)
Unloaded weights 0.0s.
2023-07-29 07:05:23,428 - Inpaint Anything - INFO - input_image: (512, 512, 3) uint8
2023-07-29 07:05:24,105 - Inpaint Anything - INFO - SamAutomaticMaskGenerator sam_vit_b_01ec64.pth
2023-07-29 07:05:29,472 - Inpaint Anything - ERROR - The size of tensor a (0) must match the size of tensor b (256) at non-singleton dimension 1
Loading weights [e6415c4892] from N:\stable-diffusion-webui-directml\models\Stable-diffusion\00-RD\Realistic-sd15\Realistic_Vision_V2.0.safetensors
Creating model from config: N:\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: N:\stable-diffusion-webui-directml\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: sub-quadratic... done.
Model loaded in 1.3s (create model: 0.4s, apply weights to model: 0.5s, load VAE: 0.2s, calculate empty prompt: 0.2s).
Thank you for giving it a try. I see that the error message has changed. I'll investigate this further.
I am available for testing...
Hello, do you have any news?...
I tried all the models and the errors are not the same...
Already up to date.
venv "L:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.1
Commit hash: 9a8a2a47f63c3d9b04c014a715f95d680f461963
Launching Web UI with arguments: --device-id 1 --port 7861 --medvram --always-batch-cond-uncond --upcast-sampling --precision full --no-half-vae --disable-nan-check --use-cpu interrogate codeformer --api --autolaunch
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Something went wrong.', memory monitor disabled
Loading weights [b88d82a292] from L:\stable-diffusion-webui-directml\models\Stable-diffusion\01-Realistic-sd15\nightvisionXLPhotorealisticPortrait_alpha0650Bakedvae.safetensors
Running on local URL: http://127.0.0.1:7861
To create a public link, set `share=True` in `launch()`.
Creating model from config: L:\stable-diffusion-webui-directml\repositories\generative-models\configs\inference\sd_xl_base.yaml
Startup time: 10.2s (launcher: 0.5s, import torch: 2.9s, import gradio: 1.1s, setup paths: 0.5s, other imports: 1.3s, opts onchange: 0.3s, list SD models: 0.1s, load scripts: 1.7s, create ui: 0.9s, gradio launch: 0.6s, add APIs: 0.1s).
Applying attention optimization: sdp... done.
Model loaded in 8.1s (load weights from disk: 1.4s, create model: 0.8s, apply weights to model: 1.6s, apply half(): 1.6s, calculate empty prompt: 2.6s).
2023-08-22 14:49:29,623 - Inpaint Anything - INFO - input_image: (640, 1136, 3) uint8
2023-08-22 14:49:33,962 - Inpaint Anything - INFO - SamAutomaticMaskGenerator sam_vit_h_4b8939.pth
2023-08-22 14:49:42,094 - Inpaint Anything - ERROR - The size of tensor a (0) must match the size of tensor b (256) at non-singleton dimension 1
2023-08-22 14:50:10,932 - Inpaint Anything - INFO - input_image: (640, 1136, 3) uint8
2023-08-22 14:50:13,024 - Inpaint Anything - INFO - SamAutomaticMaskGenerator sam_vit_l_0b3195.pth
2023-08-22 14:50:19,079 - Inpaint Anything - ERROR - The size of tensor a (0) must match the size of tensor b (256) at non-singleton dimension 1
2023-08-22 14:50:47,431 - Inpaint Anything - INFO - input_image: (640, 1136, 3) uint8
2023-08-22 14:50:48,404 - Inpaint Anything - INFO - SamAutomaticMaskGenerator sam_vit_b_01ec64.pth
2023-08-22 14:50:49,031 - Inpaint Anything - ERROR - The size of tensor a (0) must match the size of tensor b (256) at non-singleton dimension 1
2023-08-22 14:51:15,040 - Inpaint Anything - INFO - input_image: (640, 1136, 3) uint8
Traceback (most recent call last):
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
output = await app.get_blocks().process_api(
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
prediction = await anyio.to_thread.run_sync(
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\ia_threading.py", line 121, in wrapper
res = func(*args, **kwargs)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\ia_threading.py", line 110, in wrapper
res = func(*args, **kwargs)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py", line 279, in run_sam
sam_mask_generator = get_sam_mask_generator(sam_checkpoint, anime_style_chk)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py", line 128, in get_sam_mask_generator
sam = sam_model_registry_local[model_type](checkpoint=sam_checkpoint)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\segment_anything_hq\build_sam.py", line 16, in build_sam_vit_h
return _build_sam(
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\segment_anything_hq\build_sam.py", line 113, in _build_sam
state_dict = torch.load(f)
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 809, in load
return _load(opened_zipfile, map_location, pickle_module, pickle_load_args)
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1172, in _load
result = unpickler.load()
File "C:\Users\StudioD\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1213, in load
dispatch[key[0]](self)
File "C:\Users\StudioD\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1254, in load_binpersid
self.append(self.persistent_load(pid))
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1142, in persistent_load
typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1116, in load_tensor
wrap_storage=restore_location(storage, location),
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 217, in default_restore_location
result = fn(storage, location)
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 182, in _cuda_deserialize
device = validate_cuda_device(location)
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 166, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
2023-08-22 14:51:53,903 - Inpaint Anything - INFO - input_image: (640, 1136, 3) uint8
Traceback (most recent call last):
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
output = await app.get_blocks().process_api(
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
prediction = await anyio.to_thread.run_sync(
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\ia_threading.py", line 121, in wrapper
res = func(*args, **kwargs)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\ia_threading.py", line 110, in wrapper
res = func(*args, **kwargs)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py", line 279, in run_sam
sam_mask_generator = get_sam_mask_generator(sam_checkpoint, anime_style_chk)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py", line 128, in get_sam_mask_generator
sam = sam_model_registry_local[model_type](checkpoint=sam_checkpoint)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\segment_anything_hq\build_sam.py", line 29, in build_sam_vit_l
return _build_sam(
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\segment_anything_hq\build_sam.py", line 113, in _build_sam
state_dict = torch.load(f)
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 809, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1172, in _load
result = unpickler.load()
File "C:\Users\StudioD\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1213, in load
dispatch[key[0]](self)
File "C:\Users\StudioD\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1254, in load_binpersid
self.append(self.persistent_load(pid))
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1142, in persistent_load
typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1116, in load_tensor
wrap_storage=restore_location(storage, location),
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 217, in default_restore_location
result = fn(storage, location)
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 182, in _cuda_deserialize
device = validate_cuda_device(location)
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 166, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
2023-08-22 14:52:09,421 - Inpaint Anything - INFO - input_image: (640, 1136, 3) uint8
Traceback (most recent call last):
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
output = await app.get_blocks().process_api(
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
prediction = await anyio.to_thread.run_sync(
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\ia_threading.py", line 121, in wrapper
res = func(*args, **kwargs)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\ia_threading.py", line 110, in wrapper
res = func(*args, **kwargs)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py", line 279, in run_sam
sam_mask_generator = get_sam_mask_generator(sam_checkpoint, anime_style_chk)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py", line 128, in get_sam_mask_generator
sam = sam_model_registry_local[model_type](checkpoint=sam_checkpoint)
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\segment_anything_hq\build_sam.py", line 39, in build_sam_vit_b
return _build_sam(
File "L:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\segment_anything_hq\build_sam.py", line 113, in _build_sam
state_dict = torch.load(f)
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 809, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1172, in _load
result = unpickler.load()
File "C:\Users\StudioD\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1213, in load
dispatch[key[0]](self)
File "C:\Users\StudioD\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1254, in load_binpersid
self.append(self.persistent_load(pid))
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1142, in persistent_load
typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1116, in load_tensor
wrap_storage=restore_location(storage, location),
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 217, in default_restore_location
result = fn(storage, location)
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 182, in _cuda_deserialize
device = validate_cuda_device(location)
File "L:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 166, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
2023-08-22 14:52:33,117 - Inpaint Anything - INFO - input_image: (640, 1136, 3) uint8
2023-08-22 14:52:36,855 - Inpaint Anything - INFO - FastSamAutomaticMaskGenerator FastSAM-x.pt
Ultralytics YOLOv8.0.159 Python-3.10.6 torch-2.0.0+cpu
2023-08-22 14:52:36,865 - Inpaint Anything - ERROR - Invalid CUDA 'device=privateuseone:1' requested. Use 'device=cpu' or pass valid CUDA device(s) if available, i.e. 'device=0' or 'device=0,1,2,3' for Multi-GPU.
torch.cuda.is_available(): False
torch.cuda.device_count(): 2
os.environ['CUDA_VISIBLE_DEVICES']: None
2023-08-22 14:53:01,865 - Inpaint Anything - INFO - input_image: (640, 1136, 3) uint8
2023-08-22 14:53:01,960 - Inpaint Anything - INFO - FastSamAutomaticMaskGenerator FastSAM-s.pt
Ultralytics YOLOv8.0.159 Python-3.10.6 torch-2.0.0+cpu
2023-08-22 14:53:01,963 - Inpaint Anything - ERROR - Invalid CUDA 'device=privateuseone:1' requested. Use 'device=cpu' or pass valid CUDA device(s) if available, i.e. 'device=0' or 'device=0,1,2,3' for Multi-GPU.
torch.cuda.is_available(): False
torch.cuda.device_count(): 2
os.environ['CUDA_VISIBLE_DEVICES']: privateuseone:1
2023-08-22 14:53:31,440 - Inpaint Anything - INFO - input_image: (640, 1136, 3) uint8
2023-08-22 14:53:31,699 - Inpaint Anything - INFO - SamAutomaticMaskGenerator mobile_sam.pt
2023-08-22 14:53:31,925 - Inpaint Anything - ERROR - The size of tensor a (0) must match the size of tensor b (256) at non-singleton dimension 1
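For reference, the repeated RuntimeError in the tracebacks above is torch's standard complaint when a checkpoint whose tensors were pickled on a CUDA device is loaded on a build where `torch.cuda.is_available()` is False, as is the case under DirectML. The usual remedy is to remap storages to the CPU at load time; a sketch, assuming the fix would go where build_sam.py calls plain `torch.load(f)`:

```python
import io
import torch

def load_state_dict_cpu(f):
    """Load a checkpoint with all storages remapped to the CPU, so that
    CUDA-pickled tensors deserialize even when CUDA is unavailable."""
    return torch.load(f, map_location=torch.device("cpu"))

# Round-trip demonstration with an in-memory buffer instead of a real
# SAM checkpoint file.
buf = io.BytesIO()
torch.save({"w": torch.arange(4.0)}, buf)
buf.seek(0)
state = load_state_dict_cpu(buf)
```

The loaded tensors can afterwards be moved to whatever device the caller actually wants with `.to(device)`.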
I set up a DirectML environment and conducted various investigations, but I still haven't found a solution.
--use-cpu all and --use-cpu sd no longer work for me; the webui doesn't start, and I don't know why... Would it be possible to have something like --use-cpu inpaint-anything?
I've added a checkbox titled "Run Segment Anything on CPU" to the Inpaint Anything section of the web UI Settings tab. When checked, SAM will run on the CPU.
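The decision the new checkbox makes can be sketched roughly like this (a simplification with hypothetical names, not the extension's exact code):

```python
def pick_sam_device(run_on_cpu: bool, webui_device: str) -> str:
    """Return the device string SAM should run on.

    When the "Run Segment Anything on CPU" setting is checked, force
    "cpu"; otherwise keep whatever device the web UI is configured with
    (e.g. "cuda:0", or "privateuseone:1" on DirectML builds).
    """
    return "cpu" if run_on_cpu else webui_device
```

The SAM model and its inputs are then moved with `.to(device)` based on this choice, so the DirectML device is bypassed entirely when the option is enabled.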
Thank you very much. "Run Segment Anything" is now OK and "Create Mask" is OK, but "Run Inpainting" doesn't work with any of the models...
RuntimeError: Device type privateuseone is not supported for torch.Generator() api.
Already up to date.
venv "N:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.2
Commit hash: 9fcdca36ae9e4f5b17d5222e990e335827a707ea
Launching Web UI with arguments: --device-id 1 --port 7861 --medvram --always-batch-cond-uncond --upcast-sampling --precision full --no-half-vae --disable-nan-check --use-cpu interrogate codeformer --autolaunch --api
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Something went wrong.', memory monitor disabled
Loading weights [7440042bbd] from N:\stable-diffusion-webui-directml\models\Stable-diffusion\SDXL\sd_xl_refiner_1.0.safetensors
Creating model from config: N:\stable-diffusion-webui-directml\repositories\generative-models\configs\inference\sd_xl_refiner.yaml
Running on local URL: http://127.0.0.1:7861
To create a public link, set `share=True` in `launch()`.
Startup time: 9.7s (launcher: 0.5s, import torch: 2.9s, import gradio: 0.8s, setup paths: 0.6s, other imports: 1.2s, opts onchange: 0.3s, load scripts: 1.9s, create ui: 1.0s, gradio launch: 0.2s).
Applying attention optimization: sdp... done.
Model loaded in 7.0s (load weights from disk: 0.9s, create model: 0.5s, apply weights to model: 1.5s, apply half(): 1.4s, calculate empty prompt: 2.7s).
2023-08-27 15:03:55,639 - Inpaint Anything - INFO - input_image: (1024, 1024, 3) uint8
2023-08-27 15:03:59,155 - Inpaint Anything - INFO - SAM is running on CPU... (the option has been checked)
2023-08-27 15:03:59,159 - Inpaint Anything - INFO - SamAutomaticMaskGenerator sam_vit_h_4b8939.pth
2023-08-27 15:05:53,060 - Inpaint Anything - INFO - sam_masks: 22
Processing segments: 100%|█████████████████████████████████████████████████████████████| 22/22 [00:01<00:00, 14.57it/s]
2023-08-27 15:06:35,868 - Inpaint Anything - INFO - Loading model runwayml/stable-diffusion-inpainting
2023-08-27 15:06:35,868 - Inpaint Anything - INFO - local_files_only: True
2023-08-27 15:06:39,455 - Inpaint Anything - INFO - Using sampler DDIM
2023-08-27 15:06:39,459 - Inpaint Anything - INFO - Enable model cpu offload
2023-08-27 15:06:39,468 - Inpaint Anything - INFO - Enable attention slicing
Traceback (most recent call last):
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
output = await app.get_blocks().process_api(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
prediction = await anyio.to_thread.run_sync(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "N:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\ia_threading.py", line 134, in wrapper
res = func(*args, **kwargs)
File "N:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\ia_threading.py", line 112, in wrapper
res = func(*args, **kwargs)
File "N:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py", line 414, in run_inpaint
generator = torch.Generator(devices.device).manual_seed(seed)
RuntimeError: Device type privateuseone is not supported for torch.Generator() api.
2023-08-27 15:06:55,349 - Inpaint Anything - INFO - Loading model runwayml/stable-diffusion-inpainting
2023-08-27 15:06:55,350 - Inpaint Anything - INFO - local_files_only: True
2023-08-27 15:06:58,934 - Inpaint Anything - INFO - Using sampler Euler a
2023-08-27 15:06:58,935 - Inpaint Anything - INFO - Enable model cpu offload
2023-08-27 15:06:58,945 - Inpaint Anything - INFO - Enable attention slicing
Traceback (most recent call last):
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
output = await app.get_blocks().process_api(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
prediction = await anyio.to_thread.run_sync(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "N:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\ia_threading.py", line 134, in wrapper
res = func(*args, **kwargs)
File "N:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\ia_threading.py", line 112, in wrapper
res = func(*args, **kwargs)
File "N:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py", line 414, in run_inpaint
generator = torch.Generator(devices.device).manual_seed(seed)
RuntimeError: Device type privateuseone is not supported for torch.Generator() api.
2023-08-27 15:07:48,796 - Inpaint Anything - INFO - Loading model stabilityai/stable-diffusion-2-inpainting
2023-08-27 15:07:48,796 - Inpaint Anything - INFO - local_files_only: True
2023-08-27 15:07:52,547 - Inpaint Anything - INFO - Using sampler Euler a
2023-08-27 15:07:52,548 - Inpaint Anything - INFO - Enable model cpu offload
2023-08-27 15:07:52,556 - Inpaint Anything - INFO - Enable attention slicing
Traceback (most recent call last):
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
output = await app.get_blocks().process_api(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
prediction = await anyio.to_thread.run_sync(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "N:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\ia_threading.py", line 134, in wrapper
res = func(*args, **kwargs)
File "N:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\ia_threading.py", line 112, in wrapper
res = func(*args, **kwargs)
File "N:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py", line 414, in run_inpaint
generator = torch.Generator(devices.device).manual_seed(seed)
RuntimeError: Device type privateuseone is not supported for torch.Generator() api.
Updated the Inpainting tab process to pass the CPU to torch.Generator when torch's device.type is 'privateuseone'.
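The described change can be sketched as follows (a hypothetical helper illustrating the fallback, not the extension's literal diff around `run_inpaint`):

```python
import torch

def make_generator(device: torch.device, seed: int) -> torch.Generator:
    """Create a seeded torch.Generator, falling back to the CPU for
    DirectML's "privateuseone" device type, which the torch.Generator()
    API rejects with the RuntimeError seen above."""
    if device.type == "privateuseone":
        device = torch.device("cpu")
    return torch.Generator(device).manual_seed(seed)
```

Seeding the generator on the CPU still gives reproducible results; the sampled noise is simply produced on the CPU and moved to the execution device afterwards.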
AssertionError: Torch not compiled with CUDA enabled
Already up to date.
venv "N:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.2
Commit hash: 253a6bbfa651168dea13bb37be17e8a47c183bf2
Launching Web UI with arguments: --device-id 1 --port 7861 --medvram --always-batch-cond-uncond --upcast-sampling --precision full --no-half-vae --disable-nan-check --use-cpu interrogate codeformer --autolaunch --api
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Something went wrong.', memory monitor disabled
Loading weights [31e35c80fc] from N:\stable-diffusion-webui-directml\models\Stable-diffusion\SDXL\sd_xl_base_1.0.safetensors
Running on local URL: http://127.0.0.1:7861
To create a public link, set `share=True` in `launch()`.
Creating model from config: N:\stable-diffusion-webui-directml\repositories\generative-models\configs\inference\sd_xl_base.yaml
Startup time: 9.2s (launcher: 0.5s, import torch: 2.8s, import gradio: 1.1s, setup paths: 0.5s, other imports: 1.2s, opts onchange: 0.3s, load scripts: 1.0s, create ui: 1.1s, gradio launch: 0.4s, add APIs: 0.1s).
Applying attention optimization: sdp... done.
Model loaded in 8.3s (load weights from disk: 1.5s, create model: 0.6s, apply weights to model: 1.6s, apply half(): 1.7s, calculate empty prompt: 2.8s).
2023-08-28 07:01:24,011 - Inpaint Anything - INFO - input_image: (768, 768, 3) uint8
2023-08-28 07:01:27,538 - Inpaint Anything - INFO - SAM is running on CPU... (the option has been checked)
2023-08-28 07:01:27,543 - Inpaint Anything - INFO - SamAutomaticMaskGenerator sam_vit_h_4b8939.pth
2023-08-28 07:03:14,556 - Inpaint Anything - INFO - sam_masks: 62
Processing segments: 100%|█████████████████████████████████████████████████████████████| 62/62 [00:00<00:00, 68.19it/s]
2023-08-28 07:04:02,179 - Inpaint Anything - INFO - Loading model stabilityai/stable-diffusion-2-inpainting
2023-08-28 07:04:02,179 - Inpaint Anything - INFO - local_files_only: True
2023-08-28 07:04:05,183 - Inpaint Anything - INFO - Using sampler DDIM
2023-08-28 07:04:05,187 - Inpaint Anything - INFO - Enable model cpu offload
2023-08-28 07:04:05,198 - Inpaint Anything - INFO - Enable attention slicing
Traceback (most recent call last):
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
output = await app.get_blocks().process_api(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
prediction = await anyio.to_thread.run_sync(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "N:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\ia_threading.py", line 134, in wrapper
res = func(*args, **kwargs)
File "N:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\ia_threading.py", line 112, in wrapper
res = func(*args, **kwargs)
File "N:\stable-diffusion-webui-directml\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py", line 434, in run_inpaint
output_image = pipe(**pipe_args_dict).images[0]
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_inpaint.py", line 811, in __call__
prompt_embeds = self._encode_prompt(
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_inpaint.py", line 423, in _encode_prompt
text_input_ids.to(device),
File "N:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
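This assertion comes from the model-CPU-offload path: diffusers' `enable_model_cpu_offload()` assumes a CUDA execution device, and under DirectML `torch.cuda.is_available()` is False, so the offload hook trips torch's lazy CUDA init when moving the text encoder. One plausible guard, sketched with a hypothetical setup function (not the extension's actual fix):

```python
import torch

def configure_memory_savings(pipe):
    """Enable diffusers' model CPU offload only when a CUDA device is
    actually available; otherwise fall back to attention slicing, which
    works on any device and still reduces peak memory use."""
    if torch.cuda.is_available():
        pipe.enable_model_cpu_offload()
    else:
        pipe.enable_attention_slicing()
    return pipe
```

On a DirectML build this keeps the pipeline on its configured device and avoids the CUDA-only offload hooks entirely.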
Already up to date.
venv "N:\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.1
Commit hash: 2c2ca1170bcb7bbd12eef4551b8a42ab16dbe5f7
Launching Web UI with arguments: --medvram --no-half --no-half-vae --precision full --opt-sub-quad-attention --opt-split-attention-v1 --disable-nan-check
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception '', memory monitor disabled
Loading weights [d319cb2188] from N:\stable-diffusion-webui-directml\models\Stable-diffusion\02-Semi-Realistic-sd15\babes_20.safetensors
Creating model from config: N:\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 9.0s (launcher: 0.5s, import torch: 3.0s, import gradio: 1.1s, setup paths: 0.5s, other imports: 1.1s, opts onchange: 0.3s, load scripts: 1.6s, create ui: 0.7s, gradio launch: 0.1s).
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: N:\stable-diffusion-webui-directml\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: sub-quadratic... done.
Model loaded in 2.2s (load weights from disk: 0.7s, create model: 0.4s, apply weights to model: 0.5s, load VAE: 0.2s, calculate empty prompt: 0.4s).
Unloaded weights 0.0s.
2023-07-28 09:23:49,129 - Inpaint Anything - INFO - input_image: (512, 512, 3) uint8
2023-07-28 09:23:49,774 - Inpaint Anything - INFO - SamAutomaticMaskGenerator sam_vit_b_01ec64.pth
2023-07-28 09:23:53,862 - Inpaint Anything - ERROR - The GPU device does not support Double (Float64) operations!
Loading weights [d319cb2188] from N:\stable-diffusion-webui-directml\models\Stable-diffusion\02-Semi-Realistic-sd15\babes_20.safetensors
Creating model from config: N:\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: N:\stable-diffusion-webui-directml\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: sub-quadratic... done.
Model loaded in 1.4s (create model: 0.4s, apply weights to model: 0.5s, load VAE: 0.2s, calculate empty prompt: 0.2s).