comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

--fast seems to be broken after recent update #5291

Closed · huangzhike closed this issue 1 month ago

huangzhike commented 1 month ago

Expected Behavior

Running with --fast should be faster than running without it.

Actual Behavior

I've noticed that after the recent update, Flux inference has slowed down, and there is no longer any speed difference between running with the --fast option and without it. My device is an RTX 4090.

Steps to Reproduce

Run a Flux workflow with the DualCLIPLoader set to t5xxl_fp8_e4m3fn.safetensors and the UNETLoader weight_dtype set to fp8_e4m3fn_fast.
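For context on the fp8_e4m3fn dtype named in the step above: it packs each weight into 8 bits (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits, no infinities, maximum finite value 448), which is what makes the fast fp8 matmul path possible on RTX 40-series hardware. As a rough illustration of how coarse that grid is, here is a minimal pure-Python round-to-nearest sketch. This is an illustration only, not ComfyUI's implementation; the real code stores weights as torch.float8_e4m3fn tensors.

```python
import math

E4M3_MAX = 448.0          # largest finite e4m3fn value (1111.110 encoding)
E4M3_MIN_NORMAL_EXP = -6  # exponent of the smallest normal number
MANTISSA_BITS = 3

def quantize_e4m3fn(x: float) -> float:
    """Round x to the nearest value representable in fp8 e4m3fn (sketch)."""
    if x == 0.0 or math.isnan(x):
        return x
    sign = -1.0 if x < 0 else 1.0
    ax = min(abs(x), E4M3_MAX)                     # saturate instead of overflowing
    exp = max(math.floor(math.log2(ax)), E4M3_MIN_NORMAL_EXP)
    step = 2.0 ** (exp - MANTISSA_BITS)            # spacing between neighbours at this scale
    return sign * min(round(ax / step) * step, E4M3_MAX)

print(quantize_e4m3fn(0.3))     # 0.3125 -- only 8 mantissa steps per octave
print(quantize_e4m3fn(1000.0))  # 448.0  -- values above the max saturate
```

With only 8 mantissa steps between each power of two, fp8 weights trade precision for memory and matmul throughput, which is the entire point of the fp8_e4m3fn_fast path being debated in this issue.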

Debug Logs

## ComfyUI-Manager: installing dependencies done.
[2024-10-19 22:43:58.414] ** ComfyUI startup time: 2024-10-19 22:43:58.414342
[2024-10-19 22:43:58.423] ** Platform: Windows
[2024-10-19 22:43:58.424] ** Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
[2024-10-19 22:43:58.424] ** Python executable: D:\digit\ComfyUI\venv\Scripts\python.exe
[2024-10-19 22:43:58.424] ** ComfyUI Path: D:\digit\ComfyUI
[2024-10-19 22:43:58.424] ** Log path: D:\digit\ComfyUI\comfyui.log
[2024-10-19 22:43:58.432] 
Prestartup times for custom nodes:
[2024-10-19 22:43:58.432]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\rgthree-comfy
[2024-10-19 22:43:58.432]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-Easy-Use
[2024-10-19 22:43:58.432]    3.8 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-Manager
[2024-10-19 22:43:58.432] 
Total VRAM 24564 MB, total RAM 130918 MB
[2024-10-19 22:44:00.110] pytorch version: 2.4.1+cu124
[2024-10-19 22:44:01.090] D:\digit\ComfyUI\venv\lib\site-packages\xformers\ops\fmha\flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
  @torch.library.impl_abstract("xformers_flash::flash_fwd")
[2024-10-19 22:44:01.355] D:\digit\ComfyUI\venv\lib\site-packages\xformers\ops\fmha\flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
  @torch.library.impl_abstract("xformers_flash::flash_bwd")
[2024-10-19 22:44:01.525] xformers version: 0.0.27.post2
[2024-10-19 22:44:01.525] Set vram state to: NORMAL_VRAM
[2024-10-19 22:44:01.525] Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
[2024-10-19 22:44:01.793] Using xformers cross attention
[2024-10-19 22:44:02.697] [Prompt Server] web root: D:\digit\ComfyUI\web
[2024-10-19 22:44:02.699] Adding extra search path checkpoints E:/work_space/stable-diffusion-webui/models/Stable-diffusion\
[2024-10-19 22:44:02.699] Adding extra search path vae E:/work_space/stable-diffusion-webui/models/VAE
[2024-10-19 22:44:02.699] Adding extra search path loras E:/work_space/stable-diffusion-webui/models/Lora
[2024-10-19 22:44:02.699] Adding extra search path loras E:/work_space/stable-diffusion-webui/models/LyCORIS
[2024-10-19 22:44:02.699] Adding extra search path upscale_models E:/work_space/stable-diffusion-webui/models/ESRGAN
[2024-10-19 22:44:02.699] Adding extra search path upscale_models E:/work_space/stable-diffusion-webui/models/RealESRGAN
[2024-10-19 22:44:02.699] Adding extra search path upscale_models E:/work_space/stable-diffusion-webui/models/SwinIR
[2024-10-19 22:44:02.699] Adding extra search path embeddings E:/work_space/stable-diffusion-webui/embeddings
[2024-10-19 22:44:02.699] Adding extra search path hypernetworks E:/work_space/stable-diffusion-webui/models/hypernetworks
[2024-10-19 22:44:02.699] Adding extra search path controlnet E:/work_space/stable-diffusion-webui/models/ControlNet
[2024-10-19 22:44:02.699] Adding extra search path checkpoints E:\Fooocus\models/checkpoints
[2024-10-19 22:44:02.699] Adding extra search path unet E:\Fooocus\models/checkpoints
[2024-10-19 22:44:02.699] Adding extra search path vae E:\Fooocus\models/vae
[2024-10-19 22:44:02.699] Adding extra search path loras E:\Fooocus\models/loras
[2024-10-19 22:44:02.699] Adding extra search path clip E:/work_space/stable-diffusion-webui-forge/models/text_encoder/
[2024-10-19 22:44:02.927] C:\Users\RTX4090\AppData\Local\Programs\Python\Python310\lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  @torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
[2024-10-19 22:44:03.338] [Crystools INFO] Crystools version: 1.19.0
[2024-10-19 22:44:03.351] [Crystools INFO] CPU: 12th Gen Intel(R) Core(TM) i9-12900KF - Arch: AMD64 - OS: Windows 10
[2024-10-19 22:44:03.360] [Crystools INFO] Pynvml (Nvidia) initialized.
[2024-10-19 22:44:03.360] [Crystools INFO] GPU/s:
[2024-10-19 22:44:03.381] [Crystools INFO] 0) NVIDIA GeForce RTX 4090
[2024-10-19 22:44:03.381] [Crystools INFO] NVIDIA Driver: 560.94
[2024-10-19 22:44:06.038] Note: NumExpr detected 24 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.
[2024-10-19 22:44:06.038] NumExpr defaulting to 16 threads.
[2024-10-19 22:44:07.271] [ComfyUI-Easy-Use] server: v1.2.4 Loaded
[2024-10-19 22:44:07.271] [ComfyUI-Easy-Use] web root: D:\digit\ComfyUI\custom_nodes\ComfyUI-Easy-Use\web_version/v2 Loaded
[2024-10-19 22:44:07.908] Total VRAM 24564 MB, total RAM 130918 MB
[2024-10-19 22:44:07.908] pytorch version: 2.4.1+cu124
[2024-10-19 22:44:07.908] xformers version: 0.0.27.post2
[2024-10-19 22:44:07.908] Set vram state to: NORMAL_VRAM
[2024-10-19 22:44:07.909] Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
[2024-10-19 22:44:08.127] ### Loading: ComfyUI-Impact-Pack (V7.10.3)
[2024-10-19 22:44:10.517] WARNING ⚠️ Known issue with torch>=2.4.0 on Windows with CPU, recommend downgrading to torch<=2.3.1 to resolve https://github.com/ultralytics/ultralytics/issues/15049
[2024-10-19 22:44:10.580] ### Loading: ComfyUI-Impact-Pack (Subpack: V0.7)
[2024-10-19 22:44:10.608] [Impact Pack] Wildcards loading done.
[2024-10-19 22:44:10.617] ### Loading: ComfyUI-Inspire-Pack (V1.5.1)
[2024-10-19 22:44:10.743] ### Loading: ComfyUI-Manager (V2.51.8)
[2024-10-19 22:44:10.921] ### ComfyUI Revision: 2843 [3ee3c574] | Released on '2024-10-19'
[2024-10-19 22:44:11.259] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[2024-10-19 22:44:11.269] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[2024-10-19 22:44:11.276] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[2024-10-19 22:44:11.297] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[2024-10-19 22:44:11.316] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[2024-10-19 22:44:16.478] ------------------------------------------
[2024-10-19 22:44:16.478] Comfyroll Studio v1.76 :  175 Nodes Loaded
[2024-10-19 22:44:16.478] ------------------------------------------
[2024-10-19 22:44:16.478] ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
[2024-10-19 22:44:16.478] ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
[2024-10-19 22:44:16.478] ------------------------------------------
[2024-10-19 22:44:16.493] [comfyui_controlnet_aux] | INFO -> Using ckpts path: D:\digit\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
[2024-10-19 22:44:16.493] [comfyui_controlnet_aux] | INFO -> Using symlinks: False
[2024-10-19 22:44:16.493] [comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
[2024-10-19 22:44:16.514] DWPose: Onnxruntime with acceleration providers detected
[2024-10-19 22:44:16.612] 
[2024-10-19 22:44:16.612] [rgthree-comfy] Loaded 42 magnificent nodes. 🎉
[2024-10-19 22:44:16.613] 
[2024-10-19 22:44:19.068] WAS Node Suite: OpenCV Python FFMPEG support is enabled
[2024-10-19 22:44:19.068] WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `D:\digit\ComfyUI\custom_nodes\was-node-suite-comfyui\was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.
[2024-10-19 22:44:21.287] WAS Node Suite: Finished. Loaded 218 nodes successfully.
[2024-10-19 22:44:21.287] 
    "Success is not final, failure is not fatal: It is the courage to continue that counts." - Winston Churchill
[2024-10-19 22:44:21.287] 
[2024-10-19 22:44:21.313] 
Import times for custom nodes:
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\websocket_image_save.py
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\AIGODLIKE-COMFYUI-TRANSLATION
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-Adaptive-Guidance
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\sd-dynamic-thresholding
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\Comfyui_TTP_Toolset
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-Fluxpromptenhancer
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\wlsh_nodes
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI_TensorRT
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\comfyui-browser
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\OneButtonPrompt
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\comfy-image-saver
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI_essentials
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\x-flux-comfyui
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-Kolors-MZ
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-KwaiKolorsWrapper
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\rgthree-comfy
[2024-10-19 22:44:21.313]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-KJNodes
[2024-10-19 22:44:21.314]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
[2024-10-19 22:44:21.314]    0.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack
[2024-10-19 22:44:21.314]    0.1 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-DynamiCrafterWrapper
[2024-10-19 22:44:21.314]    0.1 seconds: D:\digit\ComfyUI\custom_nodes\comfyui_controlnet_aux
[2024-10-19 22:44:21.314]    0.1 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-Crystools
[2024-10-19 22:44:21.314]    0.2 seconds: D:\digit\ComfyUI\custom_nodes\comfyui-tensorops
[2024-10-19 22:44:21.314]    0.2 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-Florence2
[2024-10-19 22:44:21.314]    0.5 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-Manager
[2024-10-19 22:44:21.314]    0.6 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-EasyAnimateWrapper
[2024-10-19 22:44:21.314]    0.7 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-Easy-Use
[2024-10-19 22:44:21.314]    1.1 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-SUPIR
[2024-10-19 22:44:21.314]    2.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-Upscaler-Tensorrt
[2024-10-19 22:44:21.314]    2.0 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
[2024-10-19 22:44:21.314]    2.5 seconds: D:\digit\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
[2024-10-19 22:44:21.314]    3.1 seconds: D:\digit\ComfyUI\custom_nodes\comfyui-dynamicprompts
[2024-10-19 22:44:21.314]    4.7 seconds: D:\digit\ComfyUI\custom_nodes\was-node-suite-comfyui
[2024-10-19 22:44:21.314] 
[2024-10-19 22:44:21.328] Starting server

[2024-10-19 22:44:21.328] To see the GUI go to: http://127.0.0.1:8188
[2024-10-19 22:44:37.392] FETCH DATA from: D:\digit\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
[2024-10-19 22:44:40.071] []
[2024-10-19 22:44:40.071] []
[2024-10-19 22:44:40.136] FETCH DATA from: D:\digit\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
[2024-10-19 22:44:41.500] []
[2024-10-19 22:44:41.500] []
[2024-10-19 22:45:51.009] got prompt
[2024-10-19 22:45:51.052] Using xformers attention in VAE
[2024-10-19 22:45:51.054] Using xformers attention in VAE
[2024-10-19 22:45:51.167]     VAEae.safetensors
[2024-10-19 22:45:51.263] _CudaDeviceProperties(name='NVIDIA GeForce RTX 4090', major=8, minor=9, total_memory=24563MB, multi_processor_count=128)
[2024-10-19 22:45:51.263] 2.4.1+cu124
[2024-10-19 22:45:51.315] model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
[2024-10-19 22:45:51.316] model_type FLUX
[2024-10-19 22:46:02.494]     UNETLoaderflux1-dev.safetensors*fp8_e4m3fn_fast
[2024-10-19 22:46:02.558] C:\Users\RTX4090\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
[2024-10-19 22:46:03.844] clip missing: ['text_projection.weight']
[2024-10-19 22:46:04.341]     DualCLIPLoadert5xxl_fp8_e4m3fn.safetensors*clip_l.safetensors
[2024-10-19 22:46:04.350] Requested to load FluxClipModel_
[2024-10-19 22:46:04.350] Loading 1 new model
[2024-10-19 22:46:05.346] loaded completely 0.0 4777.53759765625 True
[2024-10-19 22:46:05.545] D:\digit\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
  out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
[2024-10-19 22:46:05.696] Requested to load Flux
[2024-10-19 22:46:05.696] Loading 1 new model
[2024-10-19 22:46:08.298] loaded completely 0.0 11350.048889160156 True
[2024-10-19 22:46:16.254] 
100%|    | 20/20 [00:07<00:00,  2.48it/s]
100%|    | 20/20 [00:07<00:00,  2.52it/s]
[2024-10-19 22:46:16.402] Requested to load AutoencodingEngine
[2024-10-19 22:46:16.402] Loading 1 new model
[2024-10-19 22:46:16.534] loaded completely 0.0 159.87335777282715 True
[2024-10-19 22:46:16.910] 547f33b2-15b4-464a-a777-8c6acbccfb12 Prompt executed in 25.89 seconds
[2024-10-19 22:46:18.685] got prompt
[2024-10-19 22:46:26.939] 
100%|    | 20/20 [00:08<00:00,  2.48it/s]
100%|    | 20/20 [00:08<00:00,  2.44it/s]
[2024-10-19 22:46:27.354] 1103db23-1fad-4c5f-bff4-e69cc8332e62 Prompt executed in 8.65 seconds
[2024-10-19 22:46:28.927] got prompt
[2024-10-19 22:46:37.121] 
100%|    | 20/20 [00:08<00:00,  2.45it/s]
100%|    | 20/20 [00:08<00:00,  2.46it/s]
[2024-10-19 22:46:37.515] 652fb80b-fff1-4737-917c-cc04f843df49 Prompt executed in 8.56 seconds
[2024-10-19 22:46:55.882] got prompt
[2024-10-19 22:47:04.003] 
100%|    | 20/20 [00:08<00:00,  2.49it/s]
100%|    | 20/20 [00:08<00:00,  2.48it/s]
[2024-10-19 22:47:04.420] 838e980b-46b3-4d8b-92b8-d21c2a8c7530 Prompt executed in 8.52 seconds
[2024-10-19 22:48:56.134] got prompt
[2024-10-19 22:49:04.279] 
100%|    | 20/20 [00:08<00:00,  2.49it/s]
100%|    | 20/20 [00:08<00:00,  2.46it/s]
[2024-10-19 22:49:04.642] 813d3e7e-8343-4346-a403-d20aa1a5ea5a Prompt executed in 8.50 seconds
[2024-10-19 23:15:06.947] got prompt
[2024-10-19 23:15:15.414] 
100%|    | 20/20 [00:08<00:00,  2.42it/s]
100%|    | 20/20 [00:08<00:00,  2.37it/s]
[2024-10-19 23:15:15.841] ec020600-a06a-482f-b3e3-b0bb9dbf7b7f Prompt executed in 8.88 seconds

Other

No response

comfyanonymous commented 1 month ago

If you are generating at 1024x1024, your speed is good. There is no difference between --fast and fp8_e4m3fn_fast; they are the same thing.