comfyanonymous / ComfyUI


Press a key to continue . . . #4902

Open ZeroCool22 opened 2 months ago

ZeroCool22 commented 2 months ago

Expected Behavior

The image should be generated without errors.

Actual Behavior

ComfyUI closes without showing any errors; the console just prints "Press a key to continue . . .".

Steps to Reproduce

Run the attached workflow.

(screenshot: Screenshot_3)

Debug Logs

C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-09-12 19:20:32.868037
** Platform: Windows
** Python version: 3.11.9 (tags/v3.11.9:de54cf5, Apr  2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
** Python executable: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\python_embeded\python.exe
** ComfyUI Path: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI
** Log path: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\comfyui.log

Prestartup times for custom nodes:
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\rgthree-comfy
   1.5 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 16376 MB, total RAM 32680 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti SUPER : cudaMallocAsync
Using pytorch cross attention
[Prompt Server] web root: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\web
### Loading: ComfyUI-Impact-Pack (V7.5)
### Loading: ComfyUI-Impact-Pack (Subpack: V0.6)
[Impact Pack] Wildcards loading done.
Total VRAM 16376 MB, total RAM 32680 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti SUPER : cudaMallocAsync
### Loading: ComfyUI-Manager (V2.50.3)
### ComfyUI Revision: 2683 [b962db99] | Released on '2024-09-12'
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
------------------------------------------
Comfyroll Studio v1.76 :  175 Nodes Loaded
------------------------------------------
** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
------------------------------------------

[rgthree] Loaded 42 epic nodes.
[rgthree] NOTE: Will NOT use rgthree's optimized recursive execution as ComfyUI has changed.

WAS Node Suite: OpenCV Python FFMPEG support is enabled
WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\was-node-suite-comfyui\was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.
WAS Node Suite: Finished. Loaded 218 nodes successfully.

        "Don't wait. The time will never be just right." - Napoleon Hill

Import times for custom nodes:
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\cg-use-everywhere
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\websocket_image_save.py
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-mxToolkit
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI_JPS-Nodes
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\comfyui_segment_anything
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\comfy-image-saver
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-Chibi-Nodes
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-GGUF
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\efficiency-nodes-comfyui
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\rgthree-comfy
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI_essentials
   0.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
   0.1 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-KJNodes
   0.4 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-Manager
   0.5 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-SAM2
   1.0 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
   1.6 seconds: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\was-node-suite-comfyui

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
got prompt
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
got prompt
got prompt
got prompt
C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py:79: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:212.)
  torch_tensor = torch.from_numpy(tensor.data) # mmap

ggml_sd_loader:
 0                             466
 8                             304
 1                              10
model weight dtype torch.bfloat16, manual cast: None
model_type FLUX
Requested to load FluxClipModel_
Loading 1 new model
loaded completely 0.0 9319.23095703125 True
Requested to load FluxClipModel_
Loading 1 new model
C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
Requested to load Flux
Loading 1 new model
loaded completely 0.0 12125.320556640625 True
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:28<00:00,  2.88s/it]
Requested to load AutoencodingEngine
Loading 1 new model
loaded completely 0.0 159.87335777282715 True
Prompt executed in 124.04 seconds
loaded completely 13323.958600265503 12125.320556640625 True
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:24<00:00,  2.47s/it]
Prompt executed in 26.19 seconds
loaded completely 13323.958600265503 12125.320556640625 True
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:24<00:00,  2.48s/it]
Prompt executed in 25.86 seconds
loaded completely 13323.958600265503 12125.320556640625 True
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:24<00:00,  2.47s/it]
Prompt executed in 25.85 seconds
loaded completely 13323.958600265503 12125.320556640625 True
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:24<00:00,  2.49s/it]
Prompt executed in 25.96 seconds
got prompt
got prompt
got prompt
got prompt
got prompt
loaded completely 13323.958600265503 12125.320556640625 True
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:25<00:00,  2.50s/it]
Prompt executed in 26.78 seconds
loaded completely 13323.958600265503 12125.320556640625 True
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:24<00:00,  2.47s/it]
Prompt executed in 25.83 seconds
loaded completely 13323.958600265503 12125.320556640625 True
 30%|████████████████████████▉                                                          | 3/10 [00:10<00:23,  3.35s/it]
Processing interrupted
Prompt executed in 10.41 seconds
got prompt
got prompt
got prompt
got prompt
got prompt

C:\Users\ZeroCool22\Desktop\SwarmUI\dlbackend\comfy>pause
Presione una tecla para continuar . . .  ["Press a key to continue . . ."]

Other

No response

ZeroCool22 commented 2 months ago

It happened again...

(screenshot: Screenshot_5)

bulutharbeli commented 2 months ago

I have had the same problem since last week. I have tried several different workflows, but it keeps disconnecting all the time.

ZeroCool22 commented 2 months ago

I have had the same problem since last week. I have tried several different workflows, but it keeps disconnecting all the time.

OK, so we need to find a commit that doesn't have this problem and check it out...

yingxiongyanjin commented 2 months ago

It happened again...

mcmonkey4eva commented 2 months ago

Check Task Manager / system resource usage. That type of sudden hard crash usually indicates you ran out of system RAM.
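
A minimal monitoring sketch, assuming the third-party psutil package is installed in whatever Python you run it with (the file name ram_monitor.py and the log file name are just examples): run it in a second console while reproducing the crash, and the last lines written before the window closes show whether RAM or the page file filled up.

# ram_monitor.py - hedged sketch: logs system RAM and page file usage so an
# out-of-memory spike right before a crash is still recorded.
# Assumes psutil is installed (python -m pip install psutil).
import time
import psutil

LOG_PATH = "ram_usage.log"  # example log file name

with open(LOG_PATH, "a") as log:
    while True:
        vm = psutil.virtual_memory()   # total/used/available system RAM
        sw = psutil.swap_memory()      # page file usage
        line = (f"{time.strftime('%H:%M:%S')} "
                f"RAM {vm.percent:.0f}% used "
                f"({vm.used / 2**30:.1f}/{vm.total / 2**30:.1f} GiB), "
                f"pagefile {sw.percent:.0f}% used")
        print(line)
        log.write(line + "\n")
        log.flush()
        time.sleep(2)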

bramvera commented 2 months ago

Check Task Manager / system resource usage. That type of sudden hard crash usually indicates you ran out of system RAM.

I have the same problem, and it just happened now. Nope, system RAM is only at 43% (see attached image).

EDIT: added a video with hardware monitoring: https://github.com/user-attachments/assets/a132b7f0-14de-450e-a929-f3101704a5a9

humanm1372 commented 2 months ago

I have the same problem with Flux dev and Schnell, on both fp8 and fp16. I checked Task Manager for running out of resources, but that's not the issue. I also used the example workflows. My setup: i7 10700K, RTX 3060 12GB, 32 GB 3600 MHz RAM, Win 10. I guess I can still use dev at fp16, can't I? Does somebody have a solution? I'll be grateful.

(screenshot: task1)
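
On the VRAM side: the full fp16 Flux dev weights are considerably larger than 12 GB, so ComfyUI has to keep part of the model in system RAM and cast/offload as it goes. A quick sketch, assuming PyTorch and psutil are importable in the Python that runs ComfyUI, to see how much VRAM and RAM headroom there actually is before loading:

# vram_check.py - hedged sketch: report free/total GPU memory and system RAM
# before loading a large model, to see how much headroom is actually there.
import torch
import psutil  # assumes psutil is installed

if torch.cuda.is_available():
    free_b, total_b = torch.cuda.mem_get_info(0)   # bytes free/total on cuda:0
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}")
    print(f"VRAM free/total: {free_b / 2**30:.1f} / {total_b / 2**30:.1f} GiB")
else:
    print("CUDA is not available in this Python environment")

vm = psutil.virtual_memory()
print(f"RAM free/total:  {vm.available / 2**30:.1f} / {vm.total / 2**30:.1f} GiB")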

bramvera commented 2 months ago

OK, I found a solution; at least for me it fixed this issue:

After rebooting, set the Windows virtual memory (page file) setting back to "System managed size" again, click Set, and reboot one more time (see attached image).

I no longer have the sudden ComfyUI crash issue.
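
For anyone else hitting this: a silent exit straight to the pause prompt is consistent with Windows terminating the process when the commit limit (physical RAM plus page file) runs out, which is exactly what a too-small or fixed-size page file can cause. Below is a small, Windows-only sketch using the Win32 GlobalMemoryStatusEx call to print the commit headroom, so it can be compared before and after changing the page-file setting:

# pagefile_check.py - hedged sketch: query the Windows commit limit
# (RAM + page file) via GlobalMemoryStatusEx and print the headroom.
import ctypes
from ctypes import wintypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", wintypes.DWORD),
        ("dwMemoryLoad", wintypes.DWORD),
        ("ullTotalPhys", ctypes.c_uint64),
        ("ullAvailPhys", ctypes.c_uint64),
        ("ullTotalPageFile", ctypes.c_uint64),   # commit limit: RAM + page file
        ("ullAvailPageFile", ctypes.c_uint64),   # commit space still available
        ("ullTotalVirtual", ctypes.c_uint64),
        ("ullAvailVirtual", ctypes.c_uint64),
        ("ullAvailExtendedVirtual", ctypes.c_uint64),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
if not ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status)):
    raise ctypes.WinError()

gib = 2**30
print(f"Physical RAM free/total: {status.ullAvailPhys / gib:.1f} / {status.ullTotalPhys / gib:.1f} GiB")
print(f"Commit free/limit:       {status.ullAvailPageFile / gib:.1f} / {status.ullTotalPageFile / gib:.1f} GiB")

If the free commit space drops toward zero while the model loads, enlarging the page file (or letting Windows manage it, as described above) is the likely fix.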

ltdrdata commented 2 months ago

I have the same problem with Flux dev and Schnell, on both fp8 and fp16. I checked Task Manager for running out of resources, but that's not the issue. I also used the example workflows. My setup: i7 10700K, RTX 3060 12GB, 32 GB 3600 MHz RAM, Win 10. I guess I can still use dev at fp16, can't I? Does somebody have a solution? I'll be grateful.

(screenshot: task1)

Try uninstalling PyTorch and installing a different version. There have been reports of such issues being caused by PyTorch.
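
A small sketch to record the exact PyTorch build the embedded interpreter is using before and after a reinstall (the pip commands in the comments are examples only; pick the build matching your CUDA version from pytorch.org):

# torch_env_report.py - hedged sketch: print the PyTorch/CUDA build that
# ComfyUI's Python is actually using, to compare before/after a reinstall.
# Example commands to change versions, run from the ComfyUI folder
# (illustrative only, not a confirmed fix):
#   python_embeded\python.exe -m pip uninstall torch torchvision torchaudio
#   python_embeded\python.exe -m pip install torch --index-url https://download.pytorch.org/whl/cu121
import torch

print("torch version:   ", torch.__version__)
print("built with CUDA: ", torch.version.cuda)
print("CUDA available:  ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:          ", torch.cuda.get_device_name(0))
    print("cuDNN version:   ", torch.backends.cudnn.version())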