Installed via URL through automatic1111. When using one of the example prompts on the LCM txt2img tab, I receive the following error in the SD console window:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Let me know if more information is needed to diagnose this.
```
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.8.0, num models: 13
CivitAI Browser+: Aria2 RPC started
ControlNet preprocessor location: D:\OldD\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-08-19 17:06:31,945 - ControlNet - INFO - ControlNet v1.1.455
Loading weights [9d7b14893a] from D:\OldD\AI\stable-diffusion-webui\models\Stable-diffusion\realDream_sdxlPony9.safetensors
2024-08-19 17:06:32,468 - ControlNet - INFO - ControlNet UI callback registered.
D:\OldD\AI\stable-diffusion-webui\modules\gradio_extensons.py:25: GradioUnusedKwargWarning: You have unused kwarg parameters in Gallery, please remove them: {'grid': [2]}
  res = original_IOComponent_init(self, *args, **kwargs)
Creating model from config: D:\OldD\AI\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
  warnings.warn(
Startup time: 12.5s (prepare environment: 4.2s, import torch: 2.8s, import gradio: 0.7s, setup paths: 0.8s, initialize shared: 0.3s, other imports: 0.4s, load scripts: 2.2s, create ui: 0.7s, gradio launch: 0.3s).
Applying attention optimization: sdp... done.
Model loaded in 9.6s (load weights from disk: 0.8s, create model: 0.6s, apply weights to model: 4.8s, apply fp8: 3.0s, move model to device: 0.1s, calculate empty prompt: 0.2s).
The config attributes {'requires_safety_checker': True} were passed to LatentConsistencyModelPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'requires_safety_checker': True} are not expected by LatentConsistencyModelPipeline and will be ignored.
Traceback (most recent call last):
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\extensions\sd-webui-lcm\scripts\main.py", line 112, in generate
    result = pipe(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\extensions\sd-webui-lcm\lcm\lcm_pipeline.py", line 195, in __call__
    prompt_embeds = self._encode_prompt(
  File "D:\OldD\AI\stable-diffusion-webui\extensions\sd-webui-lcm\lcm\lcm_pipeline.py", line 98, in _encode_prompt
    prompt_embeds = self.text_encoder(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
    return self.text_model(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 730, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 227, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2233, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
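For what it's worth, the final frames show `F.embedding` indexing the CLIP text encoder's token embedding (which lives on `cuda:0`) with `input_ids` that were left on the CPU. Below is a minimal sketch of that mismatch pattern and the usual fix, moving the ids onto the weight's device before the lookup; the `encode` helper and the exact fix location inside the extension's `_encode_prompt` are assumptions on my part, not the extension's actual code:

```python
import torch
import torch.nn as nn

def encode(embedding: nn.Embedding, input_ids: torch.Tensor) -> torch.Tensor:
    # The usual fix: move the token ids onto whatever device holds the
    # embedding weights before indexing into them. If the ids stay on cpu
    # while the weights sit on cuda:0, F.embedding raises the RuntimeError
    # seen in the traceback above.
    input_ids = input_ids.to(embedding.weight.device)
    return embedding(input_ids)

# CLIP-sized vocabulary/width; placed on cuda:0 when a GPU is present.
device = "cuda" if torch.cuda.is_available() else "cpu"
embedding = nn.Embedding(49408, 768).to(device)

# Tokenized prompt is created on the cpu, as a tokenizer would return it.
input_ids = torch.tensor([[49406, 320, 49407]])

out = encode(embedding, input_ids)
print(out.shape)  # torch.Size([1, 3, 768])
```

Calling `embedding(input_ids)` directly (without the `.to(...)`) reproduces the error on a CUDA machine, which matches what `_encode_prompt` appears to be doing.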