0xbitches / sd-webui-lcm

Latent Consistency Model for AUTOMATIC1111 Stable Diffusion WebUI
MIT License

RuntimeError: Expected all tensors to be on the same device #55

Open Cardnyl opened 3 weeks ago

Cardnyl commented 3 weeks ago

Installed via URL through AUTOMATIC1111. When using one of the example prompts on the LCM txt2img tab, I receive the following error in the SD console window:

```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
```

Let me know if more information is needed to diagnose this.
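This error class is easy to reproduce outside the WebUI. Below is a minimal sketch using only stock PyTorch (the names and token ids are illustrative, not the extension's actual code): an embedding table moved to the GPU receives index tensors that a tokenizer left on the CPU, and the usual fix is to move the indices to the weight's device before the lookup.

```python
import torch

# Hypothetical repro of the failure mode: tokenizers return CPU tensors,
# so if the pipeline's text encoder lives on cuda:0 but input_ids do not,
# F.embedding raises exactly this "Expected all tensors to be on the same
# device" RuntimeError.
embedding = torch.nn.Embedding(num_embeddings=49408, embedding_dim=768)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
embedding = embedding.to(device)

input_ids = torch.tensor([[49406, 320, 1125, 49407]])  # on CPU, like a tokenizer's output

# Fix: align the indices with the model weights before indexing.
embeds = embedding(input_ids.to(embedding.weight.device))
print(embeds.shape)  # torch.Size([1, 4, 768])
```

On a CUDA machine, calling `embedding(input_ids)` without the `.to(...)` reproduces the reported error verbatim; on a CPU-only machine both tensors already agree, which is why the bug only surfaces with a GPU active.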

System Info

(screenshot: SystemInfo)

Settings in UI

(screenshot: LCM)

System Info in text form (from sd-extension-system-info)

```json
{
  "date": "Mon Aug 19 17:26:10 2024",
  "timestamp": "17:26:14",
  "uptime": "Mon Aug 19 17:06:29 2024",
  "version": { "app": "stable-diffusion-webui", "updated": "2024-07-27", "hash": "82a973c0", "url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/master" },
  "torch": "2.1.2+cu121 autocast half",
  "gpu": { "device": "NVIDIA GeForce RTX 3080 Ti (1) (sm_90) (8, 6)", "cuda": "12.1", "cudnn": 8801, "driver": "555.85" },
  "state": { "started": "Mon Aug 19 17:26:14 2024", "step": "0 / 0", "jobs": "0 / 0", "flags": "", "job": "", "text-info": "" },
  "memory": {
    "ram": { "free": 24.65, "used": 7.28, "total": 31.92 },
    "gpu": { "free": 6.67, "used": 5.33, "total": 12 },
    "gpu-active": { "current": 3.66, "peak": 6.37 },
    "gpu-allocated": { "current": 3.66, "peak": 6.37 },
    "gpu-reserved": { "current": 4.08, "peak": 6.92 },
    "gpu-inactive": { "current": 0.42, "peak": 0.87 },
    "events": { "retries": 0, "oom": 0 },
    "utilization": 0
  },
  "optimizations": [ "none" ],
  "libs": { "xformers": "0.0.23.post1", "diffusers": "0.29.2", "transformers": "4.30.2" },
  "repos": { "Stable Diffusion": "[cf1d67a] 2023-03-25", "Stable Diffusion XL": "[45c443b] 2023-07-26", "BLIP": "[48211a1] 2022-06-07", "k_diffusion": "[ab527a9] 2023-08-12" },
  "device": { "active": "cuda", "dtype": "torch.float16", "vae": "torch.float16", "unet": "torch.float16" },
  "model": {
    "configured": { "base": "realDream_sdxlPony9.safetensors [9d7b14893a]", "refiner": "", "vae": "Automatic" },
    "loaded": { "base": "D:\OldD\AI\stable-diffusion-webui\models\Stable-diffusion\realDream_sdxlPony9.safetensors", "refiner": "", "vae": null }
  },
  "schedulers": [ "DDIM", "DDIM CFG++", "DPM adaptive", "DPM fast", "DPM++ 2M", "DPM++ 2M SDE", "DPM++ 2M SDE Heun", "DPM++ 2S a", "DPM++ 3M SDE", "DPM++ SDE", "DPM2", "DPM2 a", "Euler", "Euler a", "Heun", "LCM", "LMS", "PLMS", "Restart", "UniPC" ],
  "extensions": [
    "LDSR (enabled builtin)", "Lora (enabled builtin)", "ScuNET (enabled builtin)", "SwinIR (enabled builtin)",
    "a1111-sd-webui-tagcomplete (enabled)", "adetailer (enabled)", "canvas-zoom (enabled)", "canvas-zoom-and-pan (enabled builtin)",
    "extra-options-section (enabled builtin)", "hypertile (enabled builtin)", "mobile (enabled builtin)",
    "postprocessing-for-training (enabled builtin)", "prompt-bracket-checker (enabled builtin)", "sd-civitai-browser-plus (enabled)",
    "sd-extension-system-info (enabled)", "sd-webui-controlnet (enabled)", "sd-webui-lcm (enabled)",
    "soft-inpainting (enabled builtin)", "ultimate-upscale-for-automatic1111 (enabled)"
  ],
  "platform": { "arch": "AMD64", "cpu": "AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD", "system": "Windows", "release": "Windows-10-10.0.19045-SP0", "python": "3.10.6" },
  "crossattention": "sdp - scaled dot product",
  "backend": "",
  "pipeline": ""
}
```

Full Console Output

```
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.8.0, num models: 13
CivitAI Browser+: Aria2 RPC started
ControlNet preprocessor location: D:\OldD\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-08-19 17:06:31,945 - ControlNet - INFO - ControlNet v1.1.455
Loading weights [9d7b14893a] from D:\OldD\AI\stable-diffusion-webui\models\Stable-diffusion\realDream_sdxlPony9.safetensors
2024-08-19 17:06:32,468 - ControlNet - INFO - ControlNet UI callback registered.
D:\OldD\AI\stable-diffusion-webui\modules\gradio_extensons.py:25: GradioUnusedKwargWarning: You have unused kwarg parameters in Gallery, please remove them: {'grid': [2]}
  res = original_IOComponent_init(self, *args, **kwargs)
Creating model from config: D:\OldD\AI\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
  warnings.warn(
Startup time: 12.5s (prepare environment: 4.2s, import torch: 2.8s, import gradio: 0.7s, setup paths: 0.8s, initialize shared: 0.3s, other imports: 0.4s, load scripts: 2.2s, create ui: 0.7s, gradio launch: 0.3s).
Applying attention optimization: sdp... done.
Model loaded in 9.6s (load weights from disk: 0.8s, create model: 0.6s, apply weights to model: 4.8s, apply fp8: 3.0s, move model to device: 0.1s, calculate empty prompt: 0.2s).
The config attributes {'requires_safety_checker': True} were passed to LatentConsistencyModelPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'requires_safety_checker': True} are not expected by LatentConsistencyModelPipeline and will be ignored.
Traceback (most recent call last):
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\extensions\sd-webui-lcm\scripts\main.py", line 112, in generate
    result = pipe(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\extensions\sd-webui-lcm\lcm\lcm_pipeline.py", line 195, in __call__
    prompt_embeds = self._encode_prompt(
  File "D:\OldD\AI\stable-diffusion-webui\extensions\sd-webui-lcm\lcm\lcm_pipeline.py", line 98, in _encode_prompt
    prompt_embeds = self.text_encoder(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
    return self.text_model(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 730, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 227, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "D:\OldD\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2233, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
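The traceback shows the failure originating in `_encode_prompt`, where `self.text_encoder` receives token ids that are still on the CPU. A hedged sketch of the kind of guard that addresses this (the helper name is mine, and this is not the extension's verified code, just an illustration of where a device-alignment fix would sit in that call chain):

```python
import torch

# Illustrative guard: before calling a text encoder, move the token ids onto
# whatever device the encoder's own parameters live on, so the embedding
# lookup never mixes cpu and cuda:0 tensors.
def encode_on_encoder_device(text_encoder: torch.nn.Module, input_ids: torch.Tensor) -> torch.Tensor:
    target = next(text_encoder.parameters()).device
    return text_encoder(input_ids.to(target))

# Stand-in for a CLIP text encoder; vocab/width match CLIP ViT-L/14.
encoder = torch.nn.Embedding(49408, 768)
ids = torch.tensor([[49406, 49407]])  # CPU tensor, as a tokenizer would produce
out = encode_on_encoder_device(encoder, ids)
print(out.shape)  # torch.Size([1, 2, 768])
```

The same idea at pipeline level is simply ensuring the whole `LatentConsistencyModelPipeline` and its inputs end up on one device (e.g. a single `pipe.to(device)` plus moving the tokenized prompt to that device) rather than moving only some sub-modules.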

marc2608 commented 3 weeks ago

Same error for me. (Screenshot attached: "Capture d'écran 2024-08-22 203606")