XLabs-AI / x-flux-comfyui

Apache License 2.0

Help: the XlabsSampler node throws an exception after updating to the latest version. #121

Open dming519 opened 6 days ago

dming519 commented 6 days ago

Screenshot and error details are attached below: image

# ComfyUI Error Report

## Error Details

## System Information
- **ComfyUI Version:** v0.2.2-43-ge813abb
- **Arguments:** H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\main.py --preview-method auto --disable-cuda-malloc
- **OS:** nt
- **Python Version:** 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.3.1+cu121
## Devices

- **Name:** cuda:0 NVIDIA GeForce RTX 2080 Ti : native
  - **Type:** cuda
  - **VRAM Total:** 23621992448
  - **VRAM Free:** 1471198208
  - **Torch VRAM Total:** 20900216832
  - **Torch VRAM Free:** 142578688

## Logs

2024-09-16 09:02:38,732 - root - INFO - Total VRAM 22528 MB, total RAM 98140 MB
2024-09-16 09:02:38,732 - root - INFO - pytorch version: 2.3.1+cu121
2024-09-16 09:02:40,908 - root - INFO - xformers version: 0.0.27
2024-09-16 09:02:40,930 - root - INFO - Set vram state to: NORMAL_VRAM
2024-09-16 09:02:40,930 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 2080 Ti : native
2024-09-16 09:02:41,250 - root - INFO - Using xformers cross attention
2024-09-16 09:02:43,091 - root - INFO - [Prompt Server] web root: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\web
2024-09-16 09:02:43,095 - root - INFO - Adding extra search path checkpoints H:\StableDiffusion\stablediffusion\sd-webui-aki-v4\models/Stable-diffusion
2024-09-16 09:02:43,095 - root - INFO - Adding extra search path configs H:\StableDiffusion\stablediffusion\sd-webui-aki-v4\models/Stable-diffusion
2024-09-16 09:02:43,095 - root - INFO - Adding extra search path vae H:\StableDiffusion\stablediffusion\sd-webui-aki-v4\models/VAE
2024-09-16 09:02:43,095 - root - INFO - Adding extra search path loras H:\StableDiffusion\stablediffusion\sd-webui-aki-v4\models/Lora
2024-09-16 09:02:43,095 - root - INFO - Adding extra search path loras H:\StableDiffusion\stablediffusion\sd-webui-aki-v4\models/LyCORIS
2024-09-16 09:02:43,095 - root - INFO - Adding extra search path upscale_models H:\StableDiffusion\stablediffusion\sd-webui-aki-v4\models/ESRGAN
2024-09-16 09:02:43,096 - root - INFO - Adding extra search path upscale_models H:\StableDiffusion\stablediffusion\sd-webui-aki-v4\models/RealESRGAN
2024-09-16 09:02:43,096 - root - INFO - Adding extra search path upscale_models H:\StableDiffusion\stablediffusion\sd-webui-aki-v4\models/SwinIR
2024-09-16 09:02:43,096 - root - INFO - Adding extra search path embeddings H:\StableDiffusion\stablediffusion\sd-webui-aki-v4\embeddings
2024-09-16 09:02:43,096 - root - INFO - Adding extra search path hypernetworks H:\StableDiffusion\stablediffusion\sd-webui-aki-v4\models/hypernetworks
2024-09-16 09:02:43,096 - root - INFO - Adding extra search path controlnet H:\StableDiffusion\stablediffusion\sd-webui-aki-v4\models/ControlNet
2024-09-16 09:02:44,701 - root - WARNING - Traceback (most recent call last):
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\nodes.py", line 1994, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1016, in get_code
  File "<frozen importlib._bootstrap_external>", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\.idea\__init__.py'

2024-09-16 09:02:44,701 - root - WARNING - Cannot import H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\.idea module for custom nodes: [Errno 2] No such file or directory: 'H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\.idea\__init__.py'
2024-09-16 09:02:48,312 - root - INFO - Total VRAM 22528 MB, total RAM 98140 MB
2024-09-16 09:02:48,313 - root - INFO - pytorch version: 2.3.1+cu121
2024-09-16 09:02:48,313 - root - INFO - xformers version: 0.0.27
2024-09-16 09:02:48,313 - root - INFO - Set vram state to: NORMAL_VRAM
2024-09-16 09:02:48,313 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 2080 Ti : native
2024-09-16 09:02:49,352 - root - WARNING - Traceback (most recent call last):
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\nodes.py", line 1994, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Inference-Core-Nodes\__init__.py", line 1, in <module>
    from inference_core_nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
ModuleNotFoundError: No module named 'inference_core_nodes'

2024-09-16 09:02:49,352 - root - WARNING - Cannot import H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Inference-Core-Nodes module for custom nodes: No module named 'inference_core_nodes'
2024-09-16 09:02:51,893 - root - INFO - --------------
2024-09-16 09:02:51,893 - root - INFO -  ### Mixlab Nodes: Loaded
2024-09-16 09:02:51,893 - root - INFO - ChatGPT.available True
2024-09-16 09:02:51,894 - root - INFO - editmask.available True
2024-09-16 09:02:52,069 - root - INFO - ClipInterrogator.available True
2024-09-16 09:02:52,130 - root - INFO - PromptGenerate.available True
2024-09-16 09:02:52,130 - root - INFO - ChinesePrompt.available True
2024-09-16 09:02:52,130 - root - INFO - RembgNode.available True
2024-09-16 09:02:52,922 - root - INFO - TripoSR.available
2024-09-16 09:02:52,923 - root - INFO - MiniCPMNode.available
2024-09-16 09:02:53,155 - root - INFO - Scenedetect.available
2024-09-16 09:02:53,271 - root - INFO - FishSpeech.available
2024-09-16 09:02:53,272 - root - INFO -  --------------
2024-09-16 09:03:06,092 - root - INFO - Import times for custom nodes:
2024-09-16 09:03:06,092 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-DS
2024-09-16 09:03:06,092 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ControlNet-LLLite-ComfyUI
2024-09-16 09:03:06,092 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\AIGODLIKE-ComfyUI-Translation
2024-09-16 09:03:06,092 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_IPAdapter_plus
2024-09-16 09:03:06,092 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\efficiency-nodes-comfyui
2024-09-16 09:03:06,092 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\sdxl_prompt_styler
2024-09-16 09:03:06,092 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\FreeU_Advanced
2024-09-16 09:03:06,092 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-BRIA_AI-RMBG
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_TiledKSampler
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\websocket_image_save.py
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\cg-use-everywhere
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\stability-ComfyUI-nodes
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\masquerade-nodes-comfyui
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-IC-Light-Native
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\cg-image-picker
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-WD14-Tagger
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\comfyui-nodes-docs
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AutomaticCFG
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\comfyui_controlnet_aux
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_experiments
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\PowerNoiseSuite
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\Comfyui-StableSR
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-TiledDiffusion
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\comfyui-inpaint-nodes
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\images-grid-comfy-plugin
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\comfyui-lama-remover
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\comfyui-various
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-layerdiffuse
2024-09-16 09:03:06,093 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_UltimateSDUpscale
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Custom-Scripts
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds (IMPORT FAILED): H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Inference-Core-Nodes
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_essentials
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyMath
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\Derfuu_ComfyUI_ModdedNodes
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-KJNodes
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\rgthree-comfy
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\comfyui_bmad_nodes
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-IC-Light
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_tinyterraNodes
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_bitsandbytes_NF4
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_smZNodes
2024-09-16 09:03:06,094 - root - INFO - 0.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\comfyui-workspace-manager
2024-09-16 09:03:06,094 - root - INFO - 0.1 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Marigold
2024-09-16 09:03:06,094 - root - INFO - 0.1 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AnimateDiff-Evolved
2024-09-16 09:03:06,094 - root - INFO - 0.1 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\comfyui-dream-project
2024-09-16 09:03:06,094 - root - INFO - 0.1 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\Comfyui_ALY
2024-09-16 09:03:06,094 - root - INFO - 0.1 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Inspire-Pack
2024-09-16 09:03:06,094 - root - INFO - 0.1 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Crystools
2024-09-16 09:03:06,094 - root - INFO - 0.1 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\comfyui_segment_anything
2024-09-16 09:03:06,094 - root - INFO - 0.1 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\Comfyui-ergouzi-DGNJD
2024-09-16 09:03:06,094 - root - INFO - 0.1 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\CharacterFaceSwap
2024-09-16 09:03:06,094 - root - INFO - 0.2 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_LayerStyle
2024-09-16 09:03:06,094 - root - INFO - 0.2 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\failfast-comfyui-extensions
2024-09-16 09:03:06,094 - root - INFO - 0.2 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_InstantID
2024-09-16 09:03:06,095 - root - INFO - 0.3 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-BrushNet
2024-09-16 09:03:06,095 - root - INFO - 0.4 seconds (IMPORT FAILED): H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\.idea
2024-09-16 09:03:06,095 - root - INFO - 0.5 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_FizzNodes
2024-09-16 09:03:06,095 - root - INFO - 0.5 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-SUPIR
2024-09-16 09:03:06,095 - root - INFO - 0.6 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Easy-Use
2024-09-16 09:03:06,095 - root - INFO - 0.8 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-BiRefNet-ZHO
2024-09-16 09:03:06,095 - root - INFO - 0.9 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager
2024-09-16 09:03:06,095 - root - INFO - 1.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Image-Filters
2024-09-16 09:03:06,095 - root - INFO - 1.0 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Inspyrenet-Rembg
2024-09-16 09:03:06,095 - root - INFO - 1.4 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\comfyui-art-venture
2024-09-16 09:03:06,095 - root - INFO - 1.9 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\comfyui-mixlab-nodes
2024-09-16 09:03:06,095 - root - INFO - 4.1 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_Custom_Nodes_AlekPet
2024-09-16 09:03:06,095 - root - INFO - 6.6 seconds: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\was-node-suite-comfyui
2024-09-16 09:03:06,095 - root - INFO -
2024-09-16 09:03:06,116 - root - INFO -
2024-09-16 09:03:06,116 - root - INFO -
2024-09-16 09:03:06,116 - root - INFO - Starting server
2024-09-16 09:03:06,116 - root - INFO - To see the GUI go to: http://26.26.26.1:8188 or http://127.0.0.1:8188
2024-09-16 09:03:06,116 - root - INFO - To see the GUI go to: https://26.26.26.1:8189 or https://127.0.0.1:8189
2024-09-16 09:09:35,379 - root - INFO - got prompt
2024-09-16 09:09:36,034 - root - INFO - Using xformers attention in VAE
2024-09-16 09:09:36,037 - root - INFO - Using xformers attention in VAE
2024-09-16 09:09:46,690 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-09-16 09:09:46,721 - root - INFO - model_type EPS
2024-09-16 09:09:48,523 - root - INFO - Using xformers attention in VAE
2024-09-16 09:09:48,524 - root - INFO - Using xformers attention in VAE
2024-09-16 09:09:49,744 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-09-16 09:09:49,745 - root - INFO - model_type EPS
2024-09-16 09:09:54,812 - root - INFO - Using xformers attention in VAE
2024-09-16 09:09:54,814 - root - INFO - Using xformers attention in VAE
2024-09-16 09:10:02,103 - comfyui_segment_anything - WARNING - using extra model: H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\models\sams\sam_vit_h_4b8939.pth
2024-09-16 09:10:24,906 - dinov2 - INFO - using MLP layer as FFN
2024-09-16 09:10:36,135 - root - INFO - Requested to load AutoencodingEngine
2024-09-16 09:10:36,135 - root - INFO - Loading 1 new model
2024-09-16 09:10:36,228 - root - INFO - loaded completely 0.0 319.7467155456543 True
2024-09-16 09:10:37,421 - root - WARNING - clip missing: ['text_projection.weight']
2024-09-16 09:10:42,861 - root - INFO - Requested to load FluxClipModel_
2024-09-16 09:10:42,861 - root - INFO - Loading 1 new model
2024-09-16 09:10:43,883 - root - INFO - loaded completely 0.0 4777.53759765625 True
2024-09-16 09:10:44,869 - root - INFO - model weight dtype torch.float8_e4m3fn, manual cast: torch.float16
2024-09-16 09:10:44,871 - root - INFO - model_type FLUX
2024-09-16 09:10:58,873 - root - INFO - Requested to load Flux
2024-09-16 09:10:58,873 - root - INFO - Loading 1 new model
2024-09-16 09:11:02,282 - root - INFO - loaded partially 10140.6408203125 10140.46875 0
2024-09-16 09:11:02,736 - root - INFO - loaded completely 11545.95771484375 11350.048889160156 True
2024-09-16 09:11:03,894 - root - ERROR - !!! Exception during processing !!!
No operator found for memory_efficient_attention_forward with inputs:
    query : shape=(24, 4495, 1, 128) (torch.bfloat16)
    key : shape=(24, 4495, 1, 128) (torch.bfloat16)
    value : shape=(24, 4495, 1, 128) (torch.bfloat16)
    attn_bias : <class 'NoneType'>
    p : 0.0
decoderF is not supported because:
    attn_bias type is <class 'NoneType'>
    bf16 is only supported on A100+ GPUs
flshattF@v2.5.7 is not supported because:
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    bf16 is only supported on A100+ GPUs
cutlassF is not supported because:
    bf16 is only supported on A100+ GPUs
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    dtype=torch.bfloat16 (supported: {torch.float32})
    bf16 is only supported on A100+ GPUs
    unsupported embed per head: 128
2024-09-16 09:11:03,899 - root - ERROR - Traceback (most recent call last):
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\nodes.py", line 458, in sampling
    x = denoise_controlnet(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\sampling.py", line 300, in denoise_controlnet
    pred = model_forward(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\sampling.py", line 51, in model_forward
    img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\comfy\ldm\flux\layers.py", line 164, in forward
    attn = attention(torch.cat((txt_q, img_q), dim=2),
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\comfy\ldm\flux\math.py", line 11, in attention
    x = optimized_attention(q, k, v, heads, skip_reshape=True)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\comfy\ldm\modules\attention.py", line 380, in attention_xformers
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=mask)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 276, in memory_efficient_attention
    return _memory_efficient_attention(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 395, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 414, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\dispatch.py", line 119, in _dispatch_fw
    return _run_priority_list(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\dispatch.py", line 55, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
    query : shape=(24, 4495, 1, 128) (torch.bfloat16)
    key : shape=(24, 4495, 1, 128) (torch.bfloat16)
    value : shape=(24, 4495, 1, 128) (torch.bfloat16)
    attn_bias : <class 'NoneType'>
    p : 0.0
decoderF is not supported because:
    attn_bias type is <class 'NoneType'>
    bf16 is only supported on A100+ GPUs
flshattF@v2.5.7 is not supported because:
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    bf16 is only supported on A100+ GPUs
cutlassF is not supported because:
    bf16 is only supported on A100+ GPUs
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    dtype=torch.bfloat16 (supported: {torch.float32})
    bf16 is only supported on A100+ GPUs
    unsupported embed per head: 128

2024-09-16 09:11:03,900 - root - INFO - Prompt executed in 88.45 seconds
2024-09-16 09:23:45,653 - root - INFO - got prompt
2024-09-16 09:23:55,073 - root - INFO - Requested to load FluxClipModel_
2024-09-16 09:23:55,074 - root - INFO - Loading 1 new model
2024-09-16 09:23:57,407 - root - INFO - loaded completely 0.0 4777.53759765625 True
2024-09-16 09:23:59,585 - root - INFO - loaded partially 11310.637463378906 11310.278381347656 0
2024-09-16 09:23:59,631 - root - ERROR - !!! Exception during processing !!!
No operator found for memory_efficient_attention_forward with inputs:
    query : shape=(24, 4456, 1, 128) (torch.bfloat16)
    key : shape=(24, 4456, 1, 128) (torch.bfloat16)
    value : shape=(24, 4456, 1, 128) (torch.bfloat16)
    attn_bias : <class 'NoneType'>
    p : 0.0
decoderF is not supported because:
    attn_bias type is <class 'NoneType'>
    bf16 is only supported on A100+ GPUs
flshattF@v2.5.7 is not supported because:
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    bf16 is only supported on A100+ GPUs
cutlassF is not supported because:
    bf16 is only supported on A100+ GPUs
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    dtype=torch.bfloat16 (supported: {torch.float32})
    bf16 is only supported on A100+ GPUs
    unsupported embed per head: 128
2024-09-16 09:23:59,632 - root - ERROR - Traceback (most recent call last):
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\nodes.py", line 411, in sampling
    x = denoise(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\sampling.py", line 193, in denoise
    pred = model_forward(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\sampling.py", line 51, in model_forward
    img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\comfy\ldm\flux\layers.py", line 164, in forward
    attn = attention(torch.cat((txt_q, img_q), dim=2),
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\comfy\ldm\flux\math.py", line 11, in attention
    x = optimized_attention(q, k, v, heads, skip_reshape=True)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\comfy\ldm\modules\attention.py", line 380, in attention_xformers
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=mask)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 276, in memory_efficient_attention
    return _memory_efficient_attention(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 395, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 414, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\dispatch.py", line 119, in _dispatch_fw
    return _run_priority_list(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\dispatch.py", line 55, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
    query : shape=(24, 4456, 1, 128) (torch.bfloat16)
    key : shape=(24, 4456, 1, 128) (torch.bfloat16)
    value : shape=(24, 4456, 1, 128) (torch.bfloat16)
    attn_bias : <class 'NoneType'>
    p : 0.0
decoderF is not supported because:
    attn_bias type is <class 'NoneType'>
    bf16 is only supported on A100+ GPUs
flshattF@v2.5.7 is not supported because:
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    bf16 is only supported on A100+ GPUs
cutlassF is not supported because:
    bf16 is only supported on A100+ GPUs
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    dtype=torch.bfloat16 (supported: {torch.float32})
    bf16 is only supported on A100+ GPUs
    unsupported embed per head: 128

2024-09-16 09:23:59,634 - root - INFO - Prompt executed in 13.91 seconds
2024-09-16 09:25:29,372 - root - INFO - got prompt
2024-09-16 09:25:30,201 - root - WARNING - clip missing: ['text_projection.weight']
2024-09-16 09:25:39,832 - root - INFO - Requested to load FluxClipModel_
2024-09-16 09:25:39,833 - root - INFO - Loading 1 new model
2024-09-16 09:25:47,159 - root - INFO - loaded completely 0.0 9319.23095703125 True
2024-09-16 09:25:49,801 - root - INFO - loaded completely 11425.05244140625 11350.048889160156 True
2024-09-16 09:25:49,836 - root - ERROR - !!! Exception during processing !!!
No operator found for memory_efficient_attention_forward with inputs:
    query : shape=(24, 4352, 1, 128) (torch.bfloat16)
    key : shape=(24, 4352, 1, 128) (torch.bfloat16)
    value : shape=(24, 4352, 1, 128) (torch.bfloat16)
    attn_bias : <class 'NoneType'>
    p : 0.0
decoderF is not supported because:
    attn_bias type is <class 'NoneType'>
    bf16 is only supported on A100+ GPUs
flshattF@v2.5.7 is not supported because:
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    bf16 is only supported on A100+ GPUs
cutlassF is not supported because:
    bf16 is only supported on A100+ GPUs
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    dtype=torch.bfloat16 (supported: {torch.float32})
    bf16 is only supported on A100+ GPUs
    unsupported embed per head: 128
2024-09-16 09:25:49,837 - root - ERROR - Traceback (most recent call last):
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\nodes.py", line 411, in sampling
    x = denoise(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\sampling.py", line 193, in denoise
    pred = model_forward(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui\sampling.py", line 51, in model_forward
    img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\comfy\ldm\flux\layers.py", line 164, in forward
    attn = attention(torch.cat((txt_q, img_q), dim=2),
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\comfy\ldm\flux\math.py", line 11, in attention
    x = optimized_attention(q, k, v, heads, skip_reshape=True)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\comfy\ldm\modules\attention.py", line 380, in attention_xformers
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=mask)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 276, in memory_efficient_attention
    return _memory_efficient_attention(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 395, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 414, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\dispatch.py", line 119, in _dispatch_fw
    return _run_priority_list(
  File "H:\StableDiffusion\stablediffusion\ComfyUI-aki-v1.4\python\lib\site-packages\xformers\ops\fmha\dispatch.py", line 55, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
    query : shape=(24, 4352, 1, 128) (torch.bfloat16)
    key : shape=(24, 4352, 1, 128) (torch.bfloat16)
    value : shape=(24, 4352, 1, 128) (torch.bfloat16)
    attn_bias : <class 'NoneType'>
    p : 0.0
decoderF is not supported because:
    attn_bias type is <class 'NoneType'>
    bf16 is only supported on A100+ GPUs
flshattF@v2.5.7 is not supported because:
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    bf16 is only supported on A100+ GPUs
cutlassF is not supported because:
    bf16 is only supported on A100+ GPUs
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    dtype=torch.bfloat16 (supported: {torch.float32})
    bf16 is only supported on A100+ GPUs
    unsupported embed per head: 128

2024-09-16 09:25:49,838 - root - INFO - Prompt executed in 20.46 seconds

## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":22,"last_link_id":21,"nodes":[{"id":11,"type":"UNETLoader","pos":{"0":271.18927001953125,"1":869.0516357421875},"size":{"0":315,"1":82},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[10],"slot_index":0,"shape":3,"label":"模型"}],"properties":{"Node name for S&R":"UNETLoader"},"widgets_values":["FLUX1\flux1-dev-fp8.safetensors","fp8_e4m3fn"],"color":"#571a1a","bgcolor":"#6b2e2e"},{"id":12,"type":"VAELoader","pos":{"0":-188.81076049804688,"1":1349.0517578125},"size":{"0":315,"1":58},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"VAE","type":"VAE","links":[17],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAELoader"},"widgets_values":["FLUX1\ae.sft"],"color":"#571a1a","bgcolor":"#6b2e2e"},{"id":21,"type":"Fast Groups Muter (rgthree)","pos":{"0":-826,"1":232},"size":{"0":210,"1":82},"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"OPT_CONNECTION","type":"*","links":null,"label":"可选连接"}],"properties":{"matchColors":"","matchTitle":"","showNav":true,"sort":"position","customSortAlphabet":"","toggleRestriction":"default"}},{"id":13,"type":"XlabsSampler","pos":{"0":681.1890869140625,"1":959.0517578125},"size":{"0":342.5999755859375,"1":494},"flags":{},"order":8,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":10,"label":"模型"},{"name":"conditioning","type":"CONDITIONING","link":13,"label":"正面条件"},{"name":"neg_conditioning","type":"CONDITIONING","link":14,"label":"负面条件"},{"name":"latent_image","type":"LATENT","link":15,"label":"Latent"},{"name":"controlnet_condition","type":"ControlNetCondition","link":null,"label":"ControlNet条件"}],"outputs":[{"name":"latent","type":"LATENT","links":[16],"slot_index":0,"shape":3,"label":"Latent"}],"properties":{"Node name for S&R":"XlabsSampler"},"widgets_values":[227528323854023,"randomize",20,1,3,0,1],"color":"#57571a","bgcolor":"#6b6b2e"},{"id":14,"type":"CLIPTextEncodeFlux","pos":{"0":291.18927001953125,"1":1049.0517578125},"size":{"0":281.8453674316406,"1":160},"flags":{},"order":7,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":11,"label":"CLIP"},{"name":"t5xxl","type":"STRING","link":20,"widget":{"name":"t5xxl"},"label":"T5XXL"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[13],"slot_index":0,"shape":3,"label":"条件"}],"properties":{"Node name for S&R":"CLIPTextEncodeFlux"},"widgets_values":["","",4,true,true],"color":"#572e1a","bgcolor":"#6b422e"},{"id":15,"type":"CLIPTextEncodeFlux","pos":{"0":351.18927001953125,"1":1309.0517578125},"size":{"0":281.8453674316406,"1":160},"flags":{"collapsed":true},"order":6,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":12,"label":"CLIP"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[14],"slot_index":0,"shape":3,"label":"条件"}],"properties":{"Node name for S&R":"CLIPTextEncodeFlux"},"widgets_values":["","",4,true,true],"color":"#572e1a","bgcolor":"#6b422e"},{"id":10,"type":"DualCLIPLoader","pos":{"0":-180.81076049804688,"1":1150.0517578125},"size":{"0":315,"1":106},"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[11,12],"slot_index":0,"shape":3,"label":"CLIP"}],"properties":{"Node name for 
S&R":"DualCLIPLoader"},"widgets_values":["FLUX1\clip_l.safetensors","FLUX1\t5xxl_fp16.safetensors","flux"],"color":"#571a1a","bgcolor":"#6b2e2e"},{"id":17,"type":"VAEDecode","pos":{"0":1131.189208984375,"1":1039.0517578125},"size":{"0":210,"1":46},"flags":{},"order":9,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":16,"label":"Latent"},{"name":"vae","type":"VAE","link":17,"label":"VAE"}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[21],"slot_index":0,"shape":3,"label":"图像"}],"properties":{"Node name for S&R":"VAEDecode"},"color":"#2e571a","bgcolor":"#426b2e"},{"id":22,"type":"SaveImage","pos":{"0":1354.189208984375,"1":1027.0517578125},"size":{"0":315,"1":270},"flags":{},"order":10,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":21,"label":"图像"}],"outputs":[],"properties":{"Node name for S&R":"SaveImage"},"widgets_values":["ComfyUI"],"color":"#1a5757","bgcolor":"#2e6b6b"},{"id":20,"type":"DF_Text_Box","pos":{"0":-288.8106994628906,"1":879.0516357421875},"size":{"0":400,"1":200},"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"STRING","type":"STRING","links":[20],"slot_index":0,"shape":3,"label":"STRING"}],"properties":{"Node name for S&R":"DF_Text_Box"},"widgets_values":["Realistic photo of a woman in a leather jacket sitting on a motorcycle, leaning slightly forward, with her right foot on the ground and her left foot on the motorcycle's footrest, facing the camera, with the motorcycle's wheels in contact with the road, HD quality, natural look, high contrast, surreal and vast landscape",true]},{"id":16,"type":"EmptyLatentImage","pos":{"0":232.1892547607422,"1":1376.0517578125},"size":{"0":315,"1":106},"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[15],"shape":3,"label":"Latent"}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[1024,1024,1],"color":"#1a572e","bgcolor":"#2e6b42"}],"links":[[10,11,0,13,0,"MODEL"],[11,10,0,14,0,"CLIP"],[12,10,0,15,0,"CLIP"],[13,14,0,13,1,"CONDITIONING"],[14,15,0,13,2,"CONDITIONING"],[15,16,0,13,3,"LATENT"],[16,13,0,17,0,"LATENT"],[17,12,0,17,1,"VAE"],[20,20,0,14,1,"STRING"],[21,17,0,22,0,"IMAGE"]],"groups":[{"title":"Flux1","bounding":[-299,795,2034,713],"color":"#3f789e","font_size":24,"flags":{}}],"config":{},"extra":{"workspace_info":{"id":"xrIF_h49Sbs-yPwjnwoeN","saveLock":false,"cloudID":null,"coverMediaPath":null},"ds":{"scale":0.7972024500000015,"offset":[511.75449464186767,-752.7824250177842]}},"version":0.4}



## Additional Context
(Please add any additional context or steps to reproduce the error here)
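For extra context, the attention failure can be reproduced outside of ComfyUI with a few lines of xformers. This is only a sketch, assuming xformers 0.0.27 on a pre-Ampere GPU (compute capability below 8.0); the tensor sizes are small illustrative stand-ins for the shapes in the log above:

```python
import torch
import xformers.ops as xops

# xformers expects [batch, seq_len, num_heads, head_dim]; head_dim=128 and
# torch.bfloat16 match the query/key/value reported in the log above.
q = torch.randn(1, 64, 1, 128, device="cuda", dtype=torch.bfloat16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# On a compute capability 7.5 card such as the RTX 2080 Ti this raises
# NotImplementedError ("No operator found for memory_efficient_attention_forward"),
# because every kernel that accepts bf16 here requires an A100-class (8.0+) GPU.
out = xops.memory_efficient_attention(q, k, v, attn_bias=None, p=0.0)
print(out.shape)
```

On Ampere or newer cards the same call succeeds, which matches the "bf16 is only supported on A100+ GPUs" lines in the dispatch error.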

My simple workflow is attached:
[workflow (4).json](https://github.com/user-attachments/files/17008047/workflow.4.json)
Xyolan commented 3 days ago

No need to ask. I checked the source code: the 2080 Ti does not support bf16. The plugin does have a check that falls back to float16, but many parts of the actual implementation still end up running in bf16. I tried patching it myself to force float16; it no longer throws the error, but execution becomes extremely slow (the process takes about an hour to finish) and the output is just noise.

Waiting for the official plugin to be updated with support for older GPUs.
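For reference, a minimal sketch of the kind of capability-based dtype guard described above (the helper name is illustrative, not the plugin's actual code); the hard part is applying it consistently everywhere the sampler builds bf16 tensors:

```python
import torch

def pick_sampling_dtype(device: torch.device = torch.device("cuda")) -> torch.dtype:
    """Prefer bf16, but fall back to fp16 on GPUs without native bf16 support,
    e.g. Turing cards such as the RTX 2080 Ti (compute capability 7.5)."""
    if device.type != "cuda":
        return torch.float32
    major, minor = torch.cuda.get_device_capability(device)
    # Ampere (compute capability 8.0) and newer expose fast bf16 kernels.
    if (major, minor) >= (8, 0) and torch.cuda.is_bf16_supported():
        return torch.bfloat16
    return torch.float16

print(pick_sampling_dtype())  # torch.float16 on a 2080 Ti
```

torch.cuda.get_device_capability and torch.cuda.is_bf16_supported are standard PyTorch calls; a 2080 Ti reports (7, 5) and would fall back to float16.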

Xyolan commented 3 days ago

If you have an older 20- or 10-series card, don't plan on using this plugin for now.

dming519 commented 3 days ago

OK, thanks. I've given up on this plugin and got the same result another way: https://comfyanonymous.github.io/ComfyUI_examples/flux/#simple-to-use-fp8-checkpoint-version