comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype. SD3.5 FP8 Mac M2 #5533

Open · Creative-comfyUI opened 6 days ago

Creative-comfyUI commented 6 days ago

Expected Behavior

The image should render.

Actual Behavior

No image is produced; execution fails with an error.

Steps to Reproduce

Use the SD3.5 example workflow from the ComfyUI wiki page.

Debug Logs

I cannot include the whole debug log; it is too long. Excerpt:

2024-11-07 23:08:51,027 - root - INFO - Total VRAM 16384 MB, total RAM 16384 MB
2024-11-07 23:08:51,027 - root - INFO - pytorch version: 2.4.1
2024-11-07 23:08:51,027 - root - INFO - Set vram state to: SHARED
2024-11-07 23:08:51,027 - root - INFO - Device: mps
2024-11-07 23:08:51,707 - root - INFO - Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
2024-11-07 23:08:52,888 - root - INFO - [Prompt Server] web root: /Volumes/mac_disk/AI/ComfyUI/web
2024-11-07 23:08:53,559 - root - WARNING - Traceback (most recent call last):
  File "/Volumes/mac_disk/AI/ComfyUI/nodes.py", line 2012, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 990, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1127, in get_code
  File "<frozen importlib._bootstrap_external>", line 1185, in get_data
FileNotFoundError: [Errno 2] No such file or directory: '/Volumes/mac_disk/AI/ComfyUI/custom_nodes/clipseg/__init__.py'

2024-11-07 23:08:53,560 - root - WARNING - Cannot import /Volumes/mac_disk/AI/ComfyUI/custom_nodes/clipseg module for custom nodes: [Errno 2] No such file or directory: '/Volumes/mac_disk/AI/ComfyUI/custom_nodes/clipseg/__init__.py'
2024-11-07 23:08:55,756 - root - INFO - Total VRAM 16384 MB, total RAM 16384 MB
2024-11-07 23:08:55,756 - root - INFO - pytorch version: 2.4.1
2024-11-07 23:08:55,756 - root - INFO - Set vram state to: SHARED
2024-11-07 23:08:55,756 - root - INFO - Device: mps
2024-11-07 23:09:05,211 - root - INFO - 
Import times for custom nodes:

2024-11-07 23:09:05,218 - root - INFO - Starting server

2024-11-07 23:09:05,218 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-11-07 23:09:26,847 - root - INFO - got prompt
2024-11-07 23:09:27,368 - root - INFO - Using split attention in VAE
2024-11-07 23:09:27,369 - root - INFO - Using split attention in VAE
2024-11-07 23:09:33,750 - root - INFO - model weight dtype torch.bfloat16, manual cast: None
2024-11-07 23:09:33,754 - root - INFO - model_type FLOW
2024-11-07 23:09:40,701 - root - INFO - Requested to load FluxClipModel_
2024-11-07 23:09:40,702 - root - INFO - Loading 1 new model
2024-11-07 23:09:40,705 - root - INFO - loaded completely 0.0 323.94775390625 True
2024-11-07 23:09:42,747 - root - INFO - Requested to load FluxClipModel_
2024-11-07 23:09:42,747 - root - INFO - Loading 1 new model
2024-11-07 23:11:19,126 - root - INFO - Requested to load Flux
2024-11-07 23:11:19,127 - root - INFO - Loading 1 new model
2024-11-07 23:12:45,079 - root - INFO - loaded completely 0.0 7880.297119140625 True
2024-11-07 23:17:57,484 - root - INFO - got prompt
2024-11-07 23:20:42,637 - root - INFO - Processing interrupted
2024-11-07 23:20:42,647 - root - INFO - Prompt executed in 675.79 seconds
2024-11-07 23:22:15,177 - root - INFO - Unloading models for lowram load.
2024-11-07 23:22:43,277 - root - INFO - 1 models unloaded.
2024-11-07 23:22:43,401 - root - INFO - Loading 1 new model
2024-11-07 23:22:59,452 - root - INFO - loaded completely 0.0 7880.297119140625 True
2024-11-07 23:35:20,213 - root - INFO - Requested to load AutoencodingEngine
2024-11-07 23:35:20,225 - root - INFO - Loading 1 new model
2024-11-07 23:35:58,888 - root - INFO - loaded completely 0.0 319.7467155456543 True
2024-11-07 23:36:10,025 - root - INFO - Prompt executed in 924.68 seconds
2024-11-07 23:44:30,055 - root - INFO - got prompt
2024-11-07 23:45:55,950 - root - INFO - Unloading models for lowram load.
2024-11-07 23:45:56,125 - root - INFO - 1 models unloaded.
2024-11-07 23:45:56,125 - root - INFO - Loading 1 new model
2024-11-07 23:46:13,772 - root - INFO - loaded completely 0.0 7880.297119140625 True
2024-11-08 00:14:36,066 - root - INFO - Requested to load AutoencodingEngine
2024-11-08 00:14:36,070 - root - INFO - Loading 1 new model
2024-11-08 00:15:31,253 - root - INFO - loaded completely 0.0 319.7467155456543 True
2024-11-08 00:16:16,545 - root - INFO - Prompt executed in 1906.03 seconds
2024-11-08 00:20:47,717 - root - INFO - got prompt
2024-11-08 00:22:16,020 - root - INFO - Unloading models for lowram load.
2024-11-08 00:22:16,106 - root - INFO - 1 models unloaded.
2024-11-08 00:22:16,106 - root - INFO - Loading 1 new model
2024-11-08 00:22:38,298 - root - INFO - loaded completely 0.0 7880.297119140625 True
2024-11-08 00:22:53,124 - root - INFO - got prompt
2024-11-08 00:22:53,547 - root - ERROR - Failed to validate prompt for output 9:
2024-11-08 00:22:53,547 - root - ERROR - * CheckpointLoaderSimple 4:
2024-11-08 00:22:53,547 - root - ERROR -   - Value not in list: ckpt_name: 'sd3.5_large_fp8_scaled.safetensors' not in (list of length 82)
2024-11-08 00:22:53,547 - root - ERROR - Output will be ignored
2024-11-08 00:22:53,547 - root - WARNING - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2024-11-08 00:23:41,232 - root - INFO - got prompt
2024-11-08 00:45:14,511 - root - INFO - Requested to load AutoencodingEngine
2024-11-08 00:45:14,519 - root - INFO - Loading 1 new model
2024-11-08 00:46:03,350 - root - INFO - loaded completely 0.0 319.7467155456543 True
2024-11-08 00:46:40,530 - root - INFO - Prompt executed in 1552.44 seconds
2024-11-08 00:46:53,921 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-11-08 00:46:53,927 - root - INFO - model_type FLOW
2024-11-08 00:49:06,020 - root - INFO - Using split attention in VAE
2024-11-08 00:49:06,026 - root - INFO - Using split attention in VAE
2024-11-08 00:49:10,767 - root - INFO - Requested to load SD3ClipModel_
2024-11-08 00:49:10,767 - root - INFO - Loading 1 new model
2024-11-08 00:49:10,775 - root - INFO - loaded completely 0.0 6228.190093994141 True
2024-11-08 00:51:12,123 - root - INFO - Requested to load SD3ClipModel_
2024-11-08 00:51:12,123 - root - INFO - Loading 1 new model
2024-11-08 00:51:12,131 - root - INFO - loaded completely 0.0 6102.49609375 True
2024-11-08 00:51:20,350 - root - WARNING - clip missing: ['text_projection.weight']
2024-11-08 00:54:36,959 - root - INFO - Requested to load SD3
2024-11-08 00:54:37,017 - root - INFO - Loading 1 new model
2024-11-08 00:55:28,542 - root - INFO - loaded completely 0.0 7683.561706542969 True
2024-11-08 00:55:30,326 - root - ERROR - !!! Exception during processing !!! Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
2024-11-08 00:55:30,608 - root - ERROR - Traceback (most recent call last):
  File "/Volumes/mac_disk/AI/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
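The traceback ends in the dtype error above: the SD3.5 FP8-scaled checkpoint stores weights as Float8_e4m3fn, which the MPS backend cannot represent, so any attempt to move those weights to the device fails. A minimal sketch of the kind of guard that avoids this (the function name and the float16 fallback choice are illustrative, not ComfyUI's actual code):

```python
# Float8 dtypes the MPS backend does not support (as of PyTorch 2.4).
MPS_UNSUPPORTED = {"float8_e4m3fn", "float8_e5m2"}

def cast_dtype_for_device(weight_dtype: str, device: str) -> str:
    """Pick a dtype the target device can hold; FP8 weights must be
    upcast (e.g. to float16) before they can live on MPS."""
    if device == "mps" and weight_dtype in MPS_UNSUPPORTED:
        return "float16"
    return weight_dtype

print(cast_dtype_for_device("float8_e4m3fn", "mps"))  # → float16
print(cast_dtype_for_device("bfloat16", "mps"))       # → bfloat16
```

The upcast trades memory for compatibility: the FP8 checkpoint still loads, but the weights occupy twice the space on the device.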

Other

No response
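As a side note, the "Cannot import ... clipseg" warning earlier in the log is unrelated to the FP8 error: that custom-node folder simply has no `__init__.py`, so the loader's `exec_module` call fails. One plausible way to spot such folders before startup (a sketch; `missing_init` is a hypothetical helper, and the demo uses a throwaway directory rather than a real `custom_nodes` path):

```python
import tempfile
from pathlib import Path

def missing_init(custom_nodes: Path) -> list[str]:
    """Return custom-node folders that lack an __init__.py and will
    therefore fail to import at server startup."""
    return sorted(d.name for d in custom_nodes.iterdir()
                  if d.is_dir() and not (d / "__init__.py").exists())

# Demo layout; point this at your real ComfyUI/custom_nodes instead.
root = Path(tempfile.mkdtemp())
(root / "clipseg").mkdir()
(root / "good_node").mkdir()
(root / "good_node" / "__init__.py").touch()
print(missing_init(root))  # → ['clipseg']
```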

Djon253 commented 4 days ago

Why do I have the feeling that nobody wants to solve this problem? There are a million messages about this error on the internet, and nobody solves anything.

Creative-comfyUI commented 4 days ago

Why do I have the feeling that nobody wants to solve this problem? There are a million messages about this error on the internet, and nobody solves anything.

I may have an idea: for some reason, it seems that on Mac you can only use GGUF models for SD3.5 and Flux, not the native FP8 models.

igor-elbert commented 3 days ago

Same issue here with a native Flux model. I will try with GGUF.

Creative-comfyUI commented 3 days ago

Good news: I found a way to run SD3.5 on Mac without this problem. Start the server with:

PYTORCH_ENABLE_MPS_FALLBACK=1 python3 main.py

It is very slow, but it works.
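For context, PYTORCH_ENABLE_MPS_FALLBACK tells PyTorch to run operations the MPS backend lacks on the CPU instead of raising, which is why it rescues the Float8_e4m3fn cast at the cost of speed. The variable is read when torch initializes, so it must be set before `import torch`; a sketch of doing the equivalent from inside a launch script (assuming the script controls its own startup order):

```python
import os

# Must happen before `import torch`: the flag is consulted during
# torch's initialization, so setting it afterwards has no effect.
os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")

print(os.environ["PYTORCH_ENABLE_MPS_FALLBACK"])  # → 1
```

Prefixing the launch command, as in the comment above, achieves the same thing without editing any file.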