Stability-AI / StableSwarmUI

StableSwarmUI: a modular Stable Diffusion web user interface, with an emphasis on making power tools easily accessible, high performance, and extensibility.
MIT License
4.59k stars · 369 forks

"Invalid operation: All available backends failed to load the model." #389

Closed ByblosHex closed 5 months ago

ByblosHex commented 5 months ago

I am able to load the following models individually and produce images with them: SD1.5, SDXL, and SD3. I am also able to produce a grid using all three of these models to compare them. However, after I do it once, I am unable to do it again unless I close and relaunch SwarmUI. Subsequent grid generations after the first will produce images with the SD1.5 and SDXL models, but error out when it's time to make the SD3 image:

```
17:33:23.402 [Info] Completed gen #2 (of 3) ... Set: 'Model=XL/sd_xl_base_1.0.safetensors', file 'sd_xl_base_10'
17:33:23.406 [Info] Generated an image in 33.98 (prep) and 15.78 (gen) seconds
17:33:24.421 [Error] GridGen stopped while running: { "error": "Invalid operation: All available backends failed to load the model."
```

mcmonkey4eva commented 5 months ago

Can you check Server -> Logs -> Debug from when this error happened? There's likely an explanation of what went wrong in the Comfy raw output.

ByblosHex commented 5 months ago

> Can you check Server -> Logs -> Debug from when this error happened? There's likely an explanation of what went wrong in the Comfy raw output.

```
2024-06-14 11:47:35.508 [Debug] [GridGenerator] Pre-prepping 1/3 ... Set: Model=Large/v1-5-pruned-emaonly.safetensors, file v15prunedemaonl
2024-06-14 11:47:35.508 [Debug] Grid Gen micro-pausing to maintain order as 0 < 20
2024-06-14 11:47:35.508 [Debug] [BackendHandler] Backend request #708 for model Large/v1-5-pruned-emaonly.safetensors, maxWait=7.00:00:00.
2024-06-14 11:47:35.508 [Debug] [BackendHandler] backend #0 will load a model: Large/v1-5-pruned-emaonly.safetensors, with 1 requests waiting for 0 seconds
2024-06-14 11:47:35.574 [Debug] [GridGenerator] Pre-prepping 2/3 ... Set: Model=XL/sd_xl_base_1.0.safetensors, file sd_xl_base_10
2024-06-14 11:47:35.574 [Debug] Grid Gen micro-pausing to maintain order as 1 < 20
2024-06-14 11:47:35.575 [Debug] [BackendHandler] Backend request #709 for model XL/sd_xl_base_1.0.safetensors, maxWait=7.00:00:00.
2024-06-14 11:47:35.599 [Debug] [GridGenerator] Pre-prepping 3/3 ... Set: Model=SD3/sd3_medium.safetensors, file sd3_medium
2024-06-14 11:47:35.599 [Debug] Grid Gen micro-pausing to maintain order as 2 < 20
2024-06-14 11:47:35.599 [Debug] [BackendHandler] Backend request #710 for model SD3/sd3_medium.safetensors, maxWait=7.00:00:00.
2024-06-14 11:47:37.608 [Debug] ComfyUI-0 on port 7821 stderr: got prompt
2024-06-14 11:47:37.609 [Debug] ComfyUI-0 on port 7821 stdout: [rgthree] Using rgthree's optimized recursive execution.
2024-06-14 11:47:38.859 [Debug] ComfyUI-0 on port 7821 stderr: model_type EPS
2024-06-14 11:47:45.340 [Debug] ComfyUI-0 on port 7821 stderr: Using pytorch attention in VAE
2024-06-14 11:47:45.341 [Debug] ComfyUI-0 on port 7821 stderr: Using pytorch attention in VAE
2024-06-14 11:47:47.714 [Debug] ComfyUI-0 on port 7821 stderr: Requested to load AutoencoderKL
2024-06-14 11:47:47.714 [Debug] ComfyUI-0 on port 7821 stderr: Loading 1 new model
2024-06-14 11:47:48.147 [Debug] ComfyUI-0 on port 7821 stderr: Prompt executed in 10.54 seconds
2024-06-14 11:47:48.271 [Debug] [BackendHandler] backend #0 loaded model, returning to pool
2024-06-14 11:47:48.693 [Debug] [BackendHandler] Backend request #708 found correct model on #0
2024-06-14 11:47:48.693 [Debug] [BackendHandler] Backend request #708 finished.
2024-06-14 11:47:48.693 [Debug] [BackendHandler] backend #0 will load a model: SD3/sd3_medium.safetensors, with 1 requests waiting for 13.1 seconds
2024-06-14 11:47:48.693 [Debug] [BackendHandler] backend #0 will load a model: XL/sd_xl_base_1.0.safetensors, with 1 requests waiting for 13.1 seconds
2024-06-14 11:47:48.729 [Debug] ComfyUI-0 on port 7821 stderr: got prompt
2024-06-14 11:47:48.729 [Debug] ComfyUI-0 on port 7821 stderr: Requested to load SD1ClipModel
2024-06-14 11:47:48.729 [Debug] ComfyUI-0 on port 7821 stderr: Loading 1 new model
2024-06-14 11:47:48.729 [Debug] ComfyUI-0 on port 7821 stdout: [rgthree] Using rgthree's optimized recursive execution.
2024-06-14 11:47:48.752 [Debug] ComfyUI-0 on port 7821 stderr: got prompt
2024-06-14 11:47:48.753 [Debug] ComfyUI-0 on port 7821 stderr: got prompt
2024-06-14 11:47:48.852 [Debug] ComfyUI-0 on port 7821 stderr: Requested to load BaseModel
2024-06-14 11:47:48.852 [Debug] ComfyUI-0 on port 7821 stderr: Loading 1 new model
2024-06-14 11:47:49.442 [Debug] ComfyUI-0 on port 7821 stderr:   0%|          | 0/28 [00:00<?, ?it/s]
[... sampler progress lines 1/28 through 28/28 (~3.5 it/s) elided; the progress-bar block characters were mojibake in the original paste ...]
2024-06-14 11:47:57.546 [Debug] ComfyUI-0 on port 7821 stderr: Prompt executed in 8.85 seconds
2024-06-14 11:47:57.546 [Debug] ComfyUI-0 on port 7821 stdout: [rgthree] Using rgthree's optimized recursive execution.
2024-06-14 11:47:57.686 [Info] Completed gen #1 (of 3) ... Set: 'Model=Large/v1-5-pruned-emaonly.safetensors', file 'v15prunedemaonl'
2024-06-14 11:47:57.688 [Info] Generated an image in 13.23 (prep) and 8.89 (gen) seconds
2024-06-14 11:47:58.537 [Debug] ComfyUI-0 on port 7821 stderr: model_type EPS
2024-06-14 11:48:08.349 [Debug] ComfyUI-0 on port 7821 stderr: Using pytorch attention in VAE
2024-06-14 11:48:08.350 [Debug] ComfyUI-0 on port 7821 stderr: Using pytorch attention in VAE
2024-06-14 11:48:13.201 [Debug] ComfyUI-0 on port 7821 stderr: Requested to load AutoencoderKL
2024-06-14 11:48:13.201 [Debug] ComfyUI-0 on port 7821 stderr: Loading 1 new model
2024-06-14 11:48:13.532 [Debug] ComfyUI-0 on port 7821 stderr: Prompt executed in 15.99 seconds
2024-06-14 11:48:13.670 [Debug] ComfyUI-0 on port 7821 stdout: [rgthree] Using rgthree's optimized recursive execution.
2024-06-14 11:48:13.678 [Debug] [BackendHandler] backend #0 loaded model, returning to pool
2024-06-14 11:48:13.811 [Debug] ComfyUI-0 on port 7821 stderr: model_type FLOW
2024-06-14 11:48:13.822 [Debug] [BackendHandler] Backend request #709 found correct model on #0
2024-06-14 11:48:13.822 [Debug] [BackendHandler] Backend request #709 finished.
2024-06-14 11:48:13.822 [Debug] ComfyUI-0 on port 7821 stderr: got prompt
2024-06-14 11:48:21.649 [Debug] ComfyUI-0 on port 7821 stderr: Using pytorch attention in VAE
2024-06-14 11:48:21.650 [Debug] ComfyUI-0 on port 7821 stderr: Using pytorch attention in VAE
2024-06-14 11:48:22.202 [Debug] ComfyUI-0 on port 7821 stderr: no CLIP/text encoder weights in checkpoint, the text encoder model will not be loaded.
2024-06-14 11:48:22.664 [Debug] ComfyUI-0 on port 7821 stderr: Requested to load AutoencodingEngine
2024-06-14 11:48:22.664 [Debug] ComfyUI-0 on port 7821 stderr: Loading 1 new model
2024-06-14 11:48:23.278 [Debug] ComfyUI-0 on port 7821 stderr: !!! Exception during processing!!! Given groups=1, weight of size [512, 16, 3, 3], expected input[1, 4, 32, 32] to have 16 channels, but got 4 channels instead
2024-06-14 11:48:23.322 [Warning] ComfyUI-0 on port 7821 stderr: Traceback (most recent call last):
2024-06-14 11:48:23.322 [Warning] ComfyUI-0 on port 7821 stderr:   File "E:\StableSwarmUI\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 151, in recursive_execute
2024-06-14 11:48:23.323 [Warning] ComfyUI-0 on port 7821 stderr:     output_data, output_ui = get_output_data(obj, input_data_all)
2024-06-14 11:48:23.323 [Warning] ComfyUI-0 on port 7821 stderr:   File "E:\StableSwarmUI\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 81, in get_output_data
2024-06-14 11:48:23.324 [Warning] ComfyUI-0 on port 7821 stderr:     return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
2024-06-14 11:48:23.324 [Warning] ComfyUI-0 on port 7821 stderr:   File "E:\StableSwarmUI\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 74, in map_node_over_list
2024-06-14 11:48:23.325 [Debug] ComfyUI-0 on port 7821 stdout: [rgthree] Using rgthree's optimized recursive execution.
2024-06-14 11:48:23.325 [Error] [BackendHandler] backend #0 failed to load model with error: System.AggregateException: One or more errors occurred. (ComfyUI execution error: Given groups=1, weight of size [512, 16, 3, 3], expected input[1, 4, 32, 32] to have 16 channels, but got 4 channels instead) ---> System.InvalidOperationException: ComfyUI execution error: Given groups=1, weight of size [512, 16, 3, 3], expected input[1, 4, 32, 32] to have 16 channels, but got 4 channels instead
   at StableSwarmUI.Builtin_ComfyUIBackend.ComfyUIAPIAbstractBackend.GetAllImagesForHistory(JToken output, CancellationToken interrupt) in E:\StableSwarmUI\StableSwarmUI\src\BuiltinExtensions\ComfyUIBackend\ComfyUIAPIAbstractBackend.cs:line 451
   at StableSwarmUI.Builtin_ComfyUIBackend.ComfyUIAPIAbstractBackend.AwaitJobLive(String workflow, String batchId, Action`1 takeOutput, T2IParamInput user_input, CancellationToken interrupt) in E:\StableSwarmUI\StableSwarmUI\src\BuiltinExtensions\ComfyUIBackend\ComfyUIAPIAbstractBackend.cs:line 382
   at StableSwarmUI.Builtin_ComfyUIBackend.ComfyUIAPIAbstractBackend.LoadModel(T2IModel model) in E:\StableSwarmUI\StableSwarmUI\src\BuiltinExtensions\ComfyUIBackend\ComfyUIAPIAbstractBackend.cs:line 768
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
   at StableSwarmUI.Backends.BackendHandler.<>c__DisplayClass50_3.b__13() in E:\StableSwarmUI\StableSwarmUI\src\Backends\BackendHandler.cs:line 1123
2024-06-14 11:48:23.325 [Warning] ComfyUI-0 on port 7821 stderr:     results.append(getattr(obj, func)(slice_dict(input_data_all, i)))
2024-06-14 11:48:23.326 [Warning] [BackendHandler] backend #0 failed to load model SD3/sd3_medium.safetensors
2024-06-14 11:48:23.326 [Warning] ComfyUI-0 on port 7821 stderr:   File "E:\StableSwarmUI\StableSwarmUI\dlbackend\comfy\ComfyUI\nodes.py", line 268, in decode
2024-06-14 11:48:23.326 [Warning] ComfyUI-0 on port 7821 stderr:     return (vae.decode(samples["samples"]), )
2024-06-14 11:48:23.327 [Warning] ComfyUI-0 on port 7821 stderr:   File "E:\StableSwarmUI\StableSwarmUI\dlbackend\comfy\ComfyUI\comfy\sd.py", line 309, in decode
2024-06-14 11:48:23.327 [Debug] Will deny backends: 0
2024-06-14 11:48:23.327 [Warning] ComfyUI-0 on port 7821 stderr:     pixel_samples[x:x+batch_number] = self.process_output(self.first_stage_model.decode(samples).to(self.output_device).float())
2024-06-14 11:48:23.328 [Warning] ComfyUI-0 on port 7821 stderr:   File "E:\StableSwarmUI\StableSwarmUI\dlbackend\comfy\ComfyUI\comfy\ldm\models\autoencoder.py", line 137, in decode
2024-06-14 11:48:23.328 [Warning] ComfyUI-0 on port 7821 stderr:     x = self.decoder(z, **kwargs)
2024-06-14 11:48:23.328 [Warning] ComfyUI-0 on port 7821 stderr:   File "E:\StableSwarmUI\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
2024-06-14 11:48:23.329 [Warning] ComfyUI-0 on port 7821 stderr:     return self._call_impl(*args, **kwargs)
2024-06-14 11:48:23.329 [Warning] ComfyUI-0 on port 7821 stderr:   File "E:\StableSwarmUI\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
2024-06-14 11:48:23.329 [Warning] ComfyUI-0 on port 7821 stderr:     return forward_call(*args, **kwargs)
2024-06-14 11:48:23.330 [Warning] ComfyUI-0 on port 7821 stderr:   File "E:\StableSwarmUI\StableSwarmUI\dlbackend\comfy\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 625, in forward
2024-06-14 11:48:23.330 [Warning] ComfyUI-0 on port 7821 stderr:     h = self.conv_in(z)
2024-06-14 11:48:23.331 [Warning] ComfyUI-0 on port 7821 stderr:   File "E:\StableSwarmUI\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
2024-06-14 11:48:23.331 [Warning] ComfyUI-0 on port 7821 stderr:     return self._call_impl(*args, **kwargs)
2024-06-14 11:48:23.332 [Warning] ComfyUI-0 on port 7821 stderr:   File "E:\StableSwarmUI\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
2024-06-14 11:48:23.332 [Warning] ComfyUI-0 on port 7821 stderr:     return forward_call(*args, **kwargs)
2024-06-14 11:48:23.333 [Warning] ComfyUI-0 on port 7821 stderr:   File "E:\StableSwarmUI\StableSwarmUI\dlbackend\comfy\ComfyUI\comfy\ops.py", line 66, in forward
2024-06-14 11:48:23.333 [Warning] ComfyUI-0 on port 7821 stderr:     return super().forward(*args, **kwargs)
2024-06-14 11:48:23.333 [Warning] ComfyUI-0 on port 7821 stderr:   File "E:\StableSwarmUI\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
2024-06-14 11:48:23.334 [Warning] ComfyUI-0 on port 7821 stderr:     return self._conv_forward(input, self.weight, self.bias)
2024-06-14 11:48:23.334 [Warning] ComfyUI-0 on port 7821 stderr:   File "E:\StableSwarmUI\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
2024-06-14 11:48:23.334 [Warning] ComfyUI-0 on port 7821 stderr:     return F.conv2d(input, weight, bias, self.stride,
2024-06-14 11:48:23.335 [Warning] ComfyUI-0 on port 7821 stderr: RuntimeError: Given groups=1, weight of size [512, 16, 3, 3], expected input[1, 4, 32, 32] to have 16 channels, but got 4 channels instead
2024-06-14 11:48:23.335 [Debug] ComfyUI-0 on port 7821 stderr: Prompt executed in 9.65 seconds
2024-06-14 11:48:23.510 [Debug] ComfyUI-0 on port 7821 stderr: model_type EPS
2024-06-14 11:48:23.916 [Warning] [BackendHandler] All backends failed to load the model! Cannot generate anything.
2024-06-14 11:48:23.917 [Error] [BackendHandler] Backend request #710 failed: System.InvalidOperationException: All available backends failed to load the model.
   at StableSwarmUI.Backends.BackendHandler.LoadHighestPressureNow(List`1 possible, List`1 available, Action releasePressure, CancellationToken cancel) in E:\StableSwarmUI\StableSwarmUI\src\Backends\BackendHandler.cs:line 1080
   at StableSwarmUI.Backends.BackendHandler.T2IBackendRequest.TryFind() in E:\StableSwarmUI\StableSwarmUI\src\Backends\BackendHandler.cs:line 842
   at StableSwarmUI.Backends.BackendHandler.RequestHandlingLoop() in E:\StableSwarmUI\StableSwarmUI\src\Backends\BackendHandler.cs:line 970
[... same "Backend request #710 failed" error and stack trace repeated in the original log ...]
2024-06-14 11:48:23.917 [Debug] [BackendHandler] Backend request #710 finished.
2024-06-14 11:48:23.917 [Error] Grid generator hit error: Invalid operation: All available backends failed to load the model.
2024-06-14 11:48:24.300 [Debug] ComfyUI-0 on port 7821 stderr: Using pytorch attention in VAE
2024-06-14 11:48:24.302 [Debug] ComfyUI-0 on port 7821 stderr: Using pytorch attention in VAE
2024-06-14 11:48:26.026 [Debug] ComfyUI-0 on port 7821 stderr: Requested to load SDXLClipModel
2024-06-14 11:48:26.026 [Debug] ComfyUI-0 on port 7821 stderr: Loading 1 new model
2024-06-14 11:48:26.621 [Debug] ComfyUI-0 on port 7821 stderr: Requested to load SDXL
2024-06-14 11:48:26.621 [Debug] ComfyUI-0 on port 7821 stderr: Loading 1 new model
2024-06-14 11:48:27.979 [Debug] ComfyUI-0 on port 7821 stderr:   0%|          | 0/28 [00:00<?, ?it/s]
[... sampler progress lines 1/28 through 28/28 (~3.2 it/s) elided ...]
2024-06-14 11:48:36.578 [Debug] ComfyUI-0 on port 7821 stderr: Requested to load AutoencoderKL
2024-06-14 11:48:36.578 [Debug] ComfyUI-0 on port 7821 stderr: Loading 1 new model
2024-06-14 11:48:38.028 [Debug] ComfyUI-0 on port 7821 stderr: Prompt executed in 14.70 seconds
2024-06-14 11:48:38.280 [Info] Completed gen #2 (of 3) ... Set: 'Model=XL/sd_xl_base_1.0.safetensors', file 'sd_xl_base_10'
2024-06-14 11:48:38.287 [Info] Generated an image in 47.75 (prep) and 14.92 (gen) seconds
2024-06-14 11:48:39.289 [Error] GridGen stopped while running: { "error": "Invalid operation: All available backends failed to load the model." }
```

mcmonkey4eva commented 5 months ago

Oh, do you possibly have a VAE selected? That error looks like you have an SDv1/SDXL VAE selected on an SD3 model - they're not intercompatible.
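The incompatibility shows up in the log as the "expected input ... to have 16 channels, but got 4" error: SD1/SDXL VAEs work in a 4-channel latent space, while SD3's VAE uses 16 channels. A minimal, hypothetical sketch of the constraint (the channel counts are real; the function is purely illustrative, not SwarmUI's code):

```python
# Latent channel counts per architecture: SD1.x and SDXL latents have
# 4 channels; SD3 uses a 16-channel latent space.
LATENT_CHANNELS = {"sd1": 4, "sdxl": 4, "sd3": 16}

def vae_is_compatible(model_arch: str, vae_arch: str) -> bool:
    """Hypothetical guard: a VAE can only decode latents whose channel
    count matches its own input layer."""
    return LATENT_CHANNELS[model_arch] == LATENT_CHANNELS[vae_arch]

# Mixing SD3 with an SD1/SDXL-family VAE fails either way, which is
# exactly the conv channel-mismatch RuntimeError in the debug log.
assert not vae_is_compatible("sd3", "sdxl")
assert vae_is_compatible("sd1", "sdxl")
```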

ByblosHex commented 5 months ago

> Oh, do you possibly have a VAE selected? That error looks like you have an SDv1/SDXL VAE selected on an SD3 model - they're not intercompatible.

I do not. I have the VAE option set to none/off.

mcmonkey4eva commented 5 months ago

Is it maybe an outdated Swarm install? Try Server Info -> Update and Restart. If it's trying to run SD3 like it's an XL model, that would cause this error message too.

ByblosHex commented 5 months ago

> Is it maybe an outdated Swarm install? Try Server Info -> Update and Restart. If it's trying to run SD3 like it's an XL model, that would cause this error message too.

I can run the update again, but it was up to date as of yesterday.

ByblosHex commented 5 months ago

18:39:45.920 [Info] Swarm is up to date! Version 0.6.4.0 is the latest.

mcmonkey4eva commented 5 months ago

Use the update button; it needs the latest git commit, not just the latest release version.

also: check that Type is shown properly on the model: [screenshot]

ByblosHex commented 5 months ago

> Use the update button; it needs the latest git commit, not just the latest release version.
>
> also: check that Type is shown properly on the model: [screenshot]

Which update button? Cuz I've already used this one and the update-windows.bat in the installation directory: [screenshots]

The SD3 model itself works fine, it's just when I try to use it in a grid configuration that it doesn't load.

[screenshot]

mcmonkey4eva commented 5 months ago

OH, I bet what happened is there's an automatic XL VAE set, and when the grid runs it tries to apply the one automatic VAE to all models -- the above commit reworks how that applies, so if that's what hit, then updating will fix the bug.

ByblosHex commented 5 months ago

> OH, I bet what happened is there's an automatic XL VAE set, and when the grid runs it tries to apply the one automatic VAE to all models -- the above commit reworks how that applies, so if that's what hit, then updating will fix the bug.

In the meantime, I am able to work around this by generating the SD3 images first in the grid, then the SDXL, and then the SD1.5 images last. That works. If I try to run SD3 after SDXL, it produces the error.
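The workaround amounts to ordering the grid's model axis so SD3 generates before the SD1/SDXL models. A hypothetical helper sketching that ordering (the priority table is just an illustration of "SD3 first"):

```python
# Hypothetical architecture priority: run SD3 before SDXL, and SDXL
# before SD1.5, so no stale 4-channel VAE state precedes the SD3 cell.
PRIORITY = {"sd3": 0, "sdxl": 1, "sd1": 2}

def order_for_grid(models: list[str]) -> list[str]:
    """Sort a grid's model-axis values by architecture priority;
    unknown architectures sort last."""
    return sorted(models, key=lambda m: PRIORITY.get(m, 99))

assert order_for_grid(["sd1", "sdxl", "sd3"]) == ["sd3", "sdxl", "sd1"]
```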

mcmonkey4eva commented 5 months ago

Yeah, that sounds like it'd be the issue I found there, then. So, yeah, update and you should be good.

XeroCreator commented 3 weeks ago

> Use the update button; it needs the latest git commit, not just the latest release version.
>
> also: check that Type is shown properly on the model: [screenshot]

Since you mentioned this here, I wanted to ask what to do if the Type says (unset)? Is there a way to correct that easily? It may help others that search and find this issue before I make another one (same error as this one, but mine says unset). I'm using SD 3.5, but SD 3 Medium says unset too. [screenshot: odd-error-after-generating-an-image-all-available-backends] Adding this for reference.
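Model type is normally inferred from the checkpoint's contents. As background, a hypothetical sketch of how a tool could peek at a `.safetensors` header to tell architectures apart: the 8-byte little-endian length prefix followed by a JSON tensor index is the real file format, but the marker key names used here are assumptions for illustration, not SwarmUI's actual detection logic:

```python
import json
import struct

def read_safetensors_keys(path: str) -> set[str]:
    """Read only the JSON header of a .safetensors file: an 8-byte
    little-endian u64 header length, then that many bytes of JSON."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return set(json.loads(f.read(header_len)).keys())

def guess_arch(keys: set[str]) -> str:
    """Hypothetical heuristic: guess architecture from tensor names
    (the marker substrings are assumptions)."""
    if any("joint_blocks" in k for k in keys):  # MMDiT-style blocks
        return "sd3"
    if any("conditioner.embedders.1" in k for k in keys):  # dual text encoders
        return "sdxl"
    return "unset"

assert guess_arch({"model.diffusion_model.joint_blocks.0.attn.qkv.weight"}) == "sd3"
```

If a checkpoint's tensor names don't match any known pattern, a tool in this sketch would report "unset", which is roughly what an unrecognized or mislabeled model looks like in the UI.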