comfyanonymous / ComfyUI

The most powerful and modular stable diffusion GUI, API, and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0
42.64k stars · 4.51k forks

After using the sd3_medium_incl_clips_t5xxlfp16.safetensors model, ComfyUI disconnects. #3911

Open Desperado1001 opened 2 weeks ago

Desperado1001 commented 2 weeks ago

Your question

After using the sd3_medium_incl_clips_t5xxlfp16.safetensors model, ComfyUI disconnects. Other models, such as dreamshaperXL_v21TurboDPMSDE.safetensors, run fine.

Logs

G:\comfyUI\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
Total VRAM 12288 MB, total RAM 32605 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
Using pytorch cross attention

Import times for custom nodes:
   0.0 seconds: G:\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIGODLIKE-ComfyUI-Translation-main
   0.0 seconds: G:\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type FLOW
Using pytorch attention in VAE
Using pytorch attention in VAE

G:\comfyUI\ComfyUI_windows_portable>pause
Press any key to continue . . .

Other

No response

mcmonkey4eva commented 2 weeks ago

Does the same happen using a non-T5 model?

It's likely that you're already using enough of your RAM on other things that loading the full-fat model with the big T5 text encoder attached pushes you over the limit, and the process is crashing from running out of memory.
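A rough back-of-envelope check supports this: at fp16, weights alone cost 2 bytes per parameter, and the "incl_clips_t5xxlfp16" checkpoint bundles the T5-XXL encoder with the diffusion model. The parameter counts below are approximate public figures for T5-XXL and SD3 Medium, not values taken from this thread:

```python
# Back-of-envelope RAM cost of the fp16 weights alone.
# Parameter counts are approximate public figures (assumption),
# and this ignores CLIP encoders, activations, and framework overhead.
t5xxl_params = 4.7e9        # ~4.7B parameters in the T5-XXL text encoder
sd3_medium_params = 2.0e9   # ~2B parameters in the SD3 Medium diffusion model
bytes_per_param_fp16 = 2

total_gib = (t5xxl_params + sd3_medium_params) * bytes_per_param_fp16 / 2**30
print(f"~{total_gib:.1f} GiB of weights before any activations or overhead")
```

On a machine with 32 GB of RAM that is already running a browser and other apps, that transient load spike can plausibly exhaust memory.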

Look at the resource usage tab of task manager while loading to see if the charts spike up to the top.
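Besides watching Task Manager, you can estimate up front how much data a checkpoint will pull into memory by reading its safetensors header. The sketch below assumes the documented safetensors layout (an 8-byte little-endian header length, then a JSON index mapping tensor names to `data_offsets`); the function name is my own, not part of ComfyUI:

```python
import json
import struct

def safetensors_payload_bytes(path):
    """Sum the tensor payload sizes declared in a .safetensors header.

    The file begins with an unsigned little-endian 64-bit header length,
    followed by that many bytes of JSON mapping each tensor name to
    {"dtype", "shape", "data_offsets": [start, end]}.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    total = 0
    for name, info in header.items():
        if name == "__metadata__":  # optional metadata entry, holds no tensor data
            continue
        start, end = info["data_offsets"]
        total += end - start
    return total

# Example (hypothetical path):
# print(safetensors_payload_bytes("sd3_medium_incl_clips_t5xxlfp16.safetensors") / 2**30, "GiB")
```

Comparing that number against your free RAM before queuing a prompt tells you whether a silent out-of-memory kill is plausible.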

mo-bai commented 2 weeks ago

Does the same happen using a non-T5 model?

It's likely just you're using enough of your RAM on other things that loading the full fat model with the big fat T5 on it is pushing you over the limit and the process is crashing from running out of memory.

Look at the resource usage tab of task manager while loading to see if the charts spike up to the top.

It's happening to me, too. I tried both the sd3_medium_incl_clips_t5xxlfp16.safetensors model and the sd3_medium_incl_clips_t5xxlfp8.safetensors model.

I checked the resource usage tab of Task Manager while loading: peak CPU and memory usage stayed below 50%, and GPU usage barely moved at all.

Logs

E:\program_files\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
Total VRAM 16380 MB, total RAM 32607 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Ti : cudaMallocAsync
Using pytorch cross attention

Import times for custom nodes:
   0.0 seconds: E:\program_files\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
   0.0 seconds: E:\program_files\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIGODLIKE-ComfyUI-Translation-main

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type FLOW
Using pytorch attention in VAE
Using pytorch attention in VAE

E:\program_files\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable>pause
huangkun1985 commented 2 weeks ago

The same issue happened to me, and I cannot fix it. Here is the log:

D:\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
Total VRAM 24564 MB, total RAM 32606 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention
Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type FLOW
Using pytorch attention in VAE
Using pytorch attention in VAE

D:\ComfyUI_windows_portable>pause
Press any key to continue . . .