Stability-AI / StableSwarmUI

StableSwarmUI, A Modular Stable Diffusion Web-User-Interface, with an emphasis on making powertools easily accessible, high performance, and extensibility.

Internal error processing T2I request: System.Net.WebSockets.WebSocketException (0x80004005) #208

Closed: Joly0 closed this issue 10 months ago

Joly0 commented 11 months ago

Hey guys, I am trying to generate images and it has worked so far, but now I constantly get this error message:

15:44:34.427 [Error] Internal error processing T2I request: System.Net.WebSockets.WebSocketException (0x80004005): The remote party closed the WebSocket connection without completing the close handshake.
   at System.Net.WebSockets.ManagedWebSocket.ThrowEOFUnexpected()
   at System.Net.WebSockets.ManagedWebSocket.EnsureBufferContainsAsync(Int32 minimumRequiredBytes, CancellationToken cancellationToken)
   at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource.GetResult(Int16 token)
   at System.Net.WebSockets.ManagedWebSocket.ReceiveAsyncPrivate[TResult](Memory`1 payloadBuffer, CancellationToken cancellationToken)
   at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource<TResult>.GetResult(Int16 token)
   at System.Threading.Tasks.ValueTask`1.ValueTaskSourceAsTask.<>c.<.cctor>b__4_0(Object state)
--- End of stack trace from previous location ---
   at StableSwarmUI.Utils.Utilities.ReceiveData(WebSocket socket, Int32 maxBytes, CancellationToken limit) in /opt/stable-diffusion/07-StableSwarm/StableSwarmUI/src/Utils/Utilities.cs:line 179
   at StableSwarmUI.Builtin_ComfyUIBackend.ComfyUIAPIAbstractBackend.AwaitJobLive(String workflow, String batchId, Action`1 takeOutput, CancellationToken interrupt) in /opt/stable-diffusion/07-StableSwarm/StableSwarmUI/src/BuiltinExtensions/ComfyUIBackend/ComfyUIAPIAbstractBackend.cs:line 348
   at StableSwarmUI.Builtin_ComfyUIBackend.ComfyUIAPIAbstractBackend.GenerateLive(T2IParamInput user_input, String batchId, Action`1 takeOutput) in /opt/stable-diffusion/07-StableSwarm/StableSwarmUI/src/BuiltinExtensions/ComfyUIBackend/ComfyUIAPIAbstractBackend.cs:line 611
   at StableSwarmUI.Text2Image.T2IEngine.CreateImageTask(T2IParamInput user_input, String batchId, GenClaim claim, Action`1 output, Action`1 setError, Boolean isWS, Single backendTimeoutMin, Action`2 saveImages, Boolean canCallTools) in /opt/stable-diffusion/07-StableSwarm/StableSwarmUI/src/Text2Image/T2IEngine.cs:line 220
15:44:34.427 [Debug] Refused to generate image for local: Something went wrong while generating images.

I am not sure why this is happening. There is also this debug message: 15:44:34.426 [Debug] Failed to process comfy workflow

mcmonkey4eva commented 11 months ago

Go to Server -> Logs, set View Type to Debug, and see if there's more to the error. Most likely the source of the error will be in the Comfy backend's logs.

Joly0 commented 11 months ago

I did that already. The output above is what shows up after enabling Debug.

Joly0 commented 11 months ago

Here is the complete log after opening the web UI and clicking Generate:

17:24:30.186 [Debug] [BackendHandler] backend #0 will load a model: dreamshaperXL_turboDpmppSDE.safetensors, with 1 requests waiting for 0 seconds
17:24:30.186 [Debug] [BackendHandler] Backend request #22 for model dreamshaperXL_turboDpmppSDE.safetensors, maxWait=7.00:00:00.
17:24:30.192 [Debug] ComfyUI-0 on port 7834 stdout: got prompt
17:24:30.490 [Debug] ComfyUI-0 on port 7834 stdout: model_type EPS
17:24:30.490 [Debug] ComfyUI-0 on port 7834 stdout: adm 2816
17:24:32.405 [Debug] ComfyUI-0 on port 7834 stdout: Using pytorch attention in VAE
17:24:32.406 [Debug] ComfyUI-0 on port 7834 stdout: Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
17:24:32.406 [Debug] ComfyUI-0 on port 7834 stdout: Using pytorch attention in VAE
17:24:33.791 [Debug] ComfyUI-0 on port 7834 stdout: missing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
17:24:33.793 [Debug] ComfyUI-0 on port 7834 stdout: left over keys: dict_keys(['conditioner.embedders.0.logit_scale', 'conditioner.embedders.0.text_projection'])
17:24:33.950 [Debug] ComfyUI-0 on port 7834 stdout: Requested to load AutoencoderKL
17:24:33.951 [Debug] ComfyUI-0 on port 7834 stdout: Loading 1 new model
17:24:34.367 [Debug] ComfyUI-0 on port 7834 stdout: Prompt executed in 4.17 seconds
17:24:34.475 [Debug] [BackendHandler] backend #0 loaded model, returning to pool
17:24:35.187 [Debug] [BackendHandler] Backend request #22 found correct model on #0
17:24:35.187 [Debug] [BackendHandler] Backend request #22 finished.
17:24:35.188 [Debug] ComfyUI-0 on port 7834 stdout: got prompt
17:24:35.565 [Debug] ComfyUI-0 on port 7834 stdout: Requested to load SDXLClipModel
17:24:35.566 [Debug] ComfyUI-0 on port 7834 stdout: Loading 1 new model
17:24:37.386 [Debug] ComfyUI-0 on port 7834 stdout: Requested to load SDXL
17:24:37.386 [Debug] ComfyUI-0 on port 7834 stdout: Loading 1 new model
17:24:41.065 [Info] Self-Start ComfyUI-0 on port 7834 unexpectedly exited (if something failed, change setting `LogLevel` to `Debug` to see why!)
17:24:41.066 [Debug] Status of ComfyUI-0 on port 7834 after process end is RUNNING
17:24:41.086 [Debug] Failed to process comfy workflow: {
  "4": {
    "class_type": "CheckpointLoaderSimple",
    "inputs": {
      "ckpt_name": "dreamshaperXL_turboDpmppSDE.safetensors"
    }
  },
  "100": {
    "class_type": "LoraLoader",
    "inputs": {
      "model": [
        "4",
        0
      ],
      "clip": [
        "4",
        1
      ],
      "lora_name": "star wars style.safetensors",
      "strength_model": 1.5,
      "strength_clip": 1.5
    }
  },
  "5": {
    "class_type": "EmptyLatentImage",
    "inputs": {
      "batch_size": 1,
      "height": 1024,
      "width": 1024
    }
  },
  "101": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "clip": [
        "100",
        1
      ],
      "text": "(photographic:1.3), (RAW photo:1.3), (ultra wide lens:1.3), (close-up:1.3), (male), (handsome:1.2), (Space smuggler:1.3), (no gravity:1.1), (32 year old), (devious), (laser blaster), small scar on his face, (standing in a space ship), wearing han solo clothing"
    }
  },
  "102": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "clip": [
        "100",
        1
      ],
      "text": "CyberRealistic_Negative, (smile), looking at camera, selfie, facing camera, looking at viewer"
    }
  },
  "10": {
    "class_type": "SwarmKSampler",
    "inputs": {
      "model": [
        "100",
        0
      ],
      "noise_seed": 1854379801,
      "steps": 5,
      "cfg": 2.0,
      "sampler_name": "dpmpp_sde",
      "scheduler": "normal",
      "positive": [
        "101",
        0
      ],
      "negative": [
        "102",
        0
      ],
      "latent_image": [
        "5",
        0
      ],
      "start_at_step": 0,
      "end_at_step": 10000,
      "return_with_leftover_noise": "disable",
      "add_noise": "enable",
      "var_seed": 0,
      "var_seed_strength": 0.0,
      "sigma_min": -1.0,
      "sigma_max": -1.0,
      "rho": 7.0,
      "previews": "default"
    }
  },
  "8": {
    "class_type": "VAEDecode",
    "inputs": {
      "vae": [
        "4",
        2
      ],
      "samples": [
        "10",
        0
      ]
    }
  },
  "9": {
    "class_type": "SwarmSaveImageWS",
    "inputs": {
      "images": [
        "8",
        0
      ]
    }
  }
} for inputs T2IParamInput(prompt: (photographic:1.3), (RAW photo:1.3), (ultra wide lens:1.3), (close-up:1.3), (male), (handsome:1.2), (Space smuggler:1.3), (no gravity:1.1), (32 year old), (devious), (laser blaster), small scar on his face, (standing in a space ship), wearing han solo clothing, negativeprompt: CyberRealistic_Negative, (smile), looking at camera, selfie, facing camera, looking at viewer, seed: 1854379801, steps: 5, cfgscale: 2, aspectratio: 1:1, width: 1024, height: 1024, model: dreamshaperXL_turboDpmppSDE.safetensors, loraweights: System.Collections.Generic.List`1[System.String], loras: System.Collections.Generic.List`1[System.String], sampler: dpmpp_sde)
17:24:41.088 [Error] Internal error processing T2I request: System.Net.WebSockets.WebSocketException (0x80004005): The remote party closed the WebSocket connection without completing the close handshake.
   at System.Net.WebSockets.ManagedWebSocket.ThrowEOFUnexpected()
   at System.Net.WebSockets.ManagedWebSocket.EnsureBufferContainsAsync(Int32 minimumRequiredBytes, CancellationToken cancellationToken)
   at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource.GetResult(Int16 token)
   at System.Net.WebSockets.ManagedWebSocket.ReceiveAsyncPrivate[TResult](Memory`1 payloadBuffer, CancellationToken cancellationToken)
   at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource<TResult>.GetResult(Int16 token)
   at System.Threading.Tasks.ValueTask`1.ValueTaskSourceAsTask.<>c.<.cctor>b__4_0(Object state)
--- End of stack trace from previous location ---
   at StableSwarmUI.Utils.Utilities.ReceiveData(WebSocket socket, Int32 maxBytes, CancellationToken limit) in /opt/stable-diffusion/07-StableSwarm/StableSwarmUI/src/Utils/Utilities.cs:line 179
   at StableSwarmUI.Builtin_ComfyUIBackend.ComfyUIAPIAbstractBackend.AwaitJobLive(String workflow, String batchId, Action`1 takeOutput, CancellationToken interrupt) in /opt/stable-diffusion/07-StableSwarm/StableSwarmUI/src/BuiltinExtensions/ComfyUIBackend/ComfyUIAPIAbstractBackend.cs:line 348
   at StableSwarmUI.Builtin_ComfyUIBackend.ComfyUIAPIAbstractBackend.GenerateLive(T2IParamInput user_input, String batchId, Action`1 takeOutput) in /opt/stable-diffusion/07-StableSwarm/StableSwarmUI/src/BuiltinExtensions/ComfyUIBackend/ComfyUIAPIAbstractBackend.cs:line 611
   at StableSwarmUI.Text2Image.T2IEngine.CreateImageTask(T2IParamInput user_input, String batchId, GenClaim claim, Action`1 output, Action`1 setError, Boolean isWS, Single backendTimeoutMin, Action`2 saveImages, Boolean canCallTools) in /opt/stable-diffusion/07-StableSwarm/StableSwarmUI/src/Text2Image/T2IEngine.cs:line 287
17:24:41.088 [Debug] Refused to generate image for local: Something went wrong while generating images.
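One way to isolate where the crash happens is to re-submit the workflow dumped above directly to the ComfyUI backend's HTTP API, bypassing Swarm's WebSocket layer entirely. Below is a minimal Python sketch - it assumes the backend is still listening on port 7834, the standard ComfyUI /prompt endpoint, and that the node graph from the log has been saved to a hypothetical workflow.json:

# replay_workflow.py - re-submit the dumped workflow straight to ComfyUI,
# bypassing Swarm's WebSocket layer (sketch; paths/ports are assumptions).
import json
import urllib.request

with open("workflow.json", "r") as f:
    workflow = json.load(f)

# ComfyUI expects {"prompt": <node graph>} posted to its /prompt endpoint.
payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:7834/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
# If the ComfyUI process exits here too, the crash is in the backend's
# model/LoRA loading, not in Swarm's WebSocket handling.
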
Joly0 commented 11 months ago

It appears to me that this might have something to do with the LoRA https://civitai.com/models/206988/star-wars-style I use. If I remove it, it works. If I add it back in, it errors out. Though I was already able to generate images with this LoRA before; it then randomly started to error out every 2-3 images, and now it happens every time.

Joly0 commented 11 months ago

Ok, I tried another LoRA, and got the same result.

mcmonkey4eva commented 11 months ago

17:24:37.386 [Debug] ComfyUI-0 on port 7834 stdout: Loading 1 new model
17:24:41.065 [Info] Self-Start ComfyUI-0 on port 7834 unexpectedly exited

It looks like Comfy is hard-crashing while loading models - this type of instant hard crash usually indicates that you're low on available system RAM.

If that's the case, you can probably make it work by restarting your PC and/or closing unrelated programs.

Joly0 commented 11 months ago

Hm, I don't think that's a RAM issue. I have 64GB of RAM, so that shouldn't be a problem. If you mean VRAM then maybe, yes - the GPU only has 4GB of VRAM. But I had no issues so far, only now and only after a while.

mcmonkey4eva commented 11 months ago

Oh. Huh. Usually VRAM errors manifest with a clear error message, while system RAM errors manifest as an instant silent crash. I'm not aware of any other issue that triggers an instant silent crash like that.

Maybe there's some form of memory leak involved (i.e. RAM keeps building up until it eventually floods past your full 64GiB and the process dies)? You might keep top or watch free -h open on the side while testing to see if there is indeed a RAM overload (see the sketch below).
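
As an alternative to eyeballing top, here's a minimal Python sketch that logs available RAM once a second (it assumes the psutil package is installed - not part of Swarm or Comfy):

# ram_watch.py - log available system RAM once a second while a generation
# runs; a slow leak shows up as a steadily shrinking number before the crash.
import time
import psutil

while True:
    mem = psutil.virtual_memory()
    print(f"available: {mem.available / 2**30:.2f} GiB "
          f"({mem.percent:.1f}% used)", flush=True)
    time.sleep(1)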

If there's not, tbh I have no idea what else could cause just an instant full process shutdown with no terminal output.