When I try to run the default workflow in ComfyUI, I run into CUDA errors. Is there any official MPS support on the way?
Here is the terminal output from running the default workflow:
$ python main.py --force-fp16 --use-split-cross-attention
** ComfyUI startup time: 2024-02-23 16:01:06.190700
** Platform: Darwin
** Python version: 3.12.1 | packaged by Anaconda, Inc. | (main, Jan 19 2024, 09:45:58) [Clang 14.0.6 ]
** Python executable: /Users/user/miniconda3/envs/comfy-svd/bin/python
** Log path: /Users/user/repos/Comfy-mt/ComfyUI/comfyui.log
Prestartup times for custom nodes:
0.0 seconds: /Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/ComfyUI-Manager
Total VRAM 65536 MB, total RAM 65536 MB
Forcing FP16.
Set vram state to: SHARED
Device: mps
VAE dtype: torch.float32
Using split optimization for cross attention
### Loading: ComfyUI-Manager (V2.7.2)
### ComfyUI Revision: 2011 [10847dfa] | Released on '2024-02-23'
### Loading: ComfyUI-Impact-Pack (V4.80)
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
### Loading: ComfyUI-Impact-Pack (Subpack: V0.4)
[Impact Pack] Wildcards loading done.
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
Using device cpu
------------------------------------------
Comfyroll Studio v1.76 : 175 Nodes Loaded
------------------------------------------
** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
------------------------------------------
Import times for custom nodes:
0.0 seconds: /Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/Derfuu_ComfyUI_ModdedNodes
0.0 seconds: /Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/ComfyUI-Frame-Interpolation
0.0 seconds: /Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/ComfyMath
0.0 seconds: /Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/ComfyUI_Comfyroll_CustomNodes
0.1 seconds: /Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/ComfyUI-Manager
0.1 seconds: /Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite
0.7 seconds: /Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/ComfyUI-MotionCtrl-SVD
6.4 seconds: /Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/ComfyUI-Impact-Pack
Starting server
To see the GUI go to: http://127.0.0.1:8188
got prompt
Loading model from /Users/user/repos/Comfy-mt/ComfyUI/models/checkpoints/motionctrl_svd.ckpt
Using device cpu
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
Attention mode 'softmax-xformers' is not available. Falling back to native attention. This is not a problem in Pytorch >= 2.0. FYI, you are running with PyTorch version 2.3.0.dev20240222.
[... the line above repeated 15 more times ...]
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "/Users/user/repos/Comfy-mt/ComfyUI/execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/repos/Comfy-mt/ComfyUI/execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/repos/Comfy-mt/ComfyUI/execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/ComfyUI-MotionCtrl-SVD/nodes.py", line 152, in load_checkpoint
model = build_model(config_path, ckpt_path, device, frame_length, steps)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/ComfyUI-MotionCtrl-SVD/gradio_utils/motionctrl_cmcm_gradio.py", line 55, in build_model
model, filter = load_model(
^^^^^^^^^^^
File "/Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/ComfyUI-MotionCtrl-SVD/gradio_utils/motionctrl_cmcm_gradio.py", line 279, in load_model
model = instantiate_from_config(config.model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/ComfyUI-MotionCtrl-SVD/sgm/util.py", line 175, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/repos/Comfy-mt/ComfyUI/comfy/../custom_nodes/ComfyUI-MotionCtrl-SVD/sgm/motionctrl/camera_motion_control.py", line 24, in __init__
super().__init__(*args, **kwargs)
File "/Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/ComfyUI-MotionCtrl-SVD/sgm/models/diffusion.py", line 60, in __init__
self.conditioner = instantiate_from_config(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/repos/Comfy-mt/ComfyUI/custom_nodes/ComfyUI-MotionCtrl-SVD/sgm/util.py", line 175, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/repos/Comfy-mt/ComfyUI/comfy/../custom_nodes/ComfyUI-MotionCtrl-SVD/sgm/modules/encoders/modules.py", line 79, in __init__
embedder = instantiate_from_config(embconfig)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/repos/Comfy-mt/ComfyUI/comfy/../custom_nodes/ComfyUI-MotionCtrl-SVD/sgm/util.py", line 175, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/repos/Comfy-mt/ComfyUI/comfy/../custom_nodes/ComfyUI-MotionCtrl-SVD/sgm/modules/encoders/modules.py", line 1038, in __init__
self.open_clip = instantiate_from_config(open_clip_embedding_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/repos/Comfy-mt/ComfyUI/comfy/../custom_nodes/ComfyUI-MotionCtrl-SVD/sgm/util.py", line 175, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/repos/Comfy-mt/ComfyUI/comfy/../custom_nodes/ComfyUI-MotionCtrl-SVD/sgm/modules/encoders/modules.py", line 591, in __init__
model, _, _ = open_clip.create_model_and_transforms(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/miniconda3/envs/comfy-svd/lib/python3.12/site-packages/open_clip/factory.py", line 384, in create_model_and_transforms
model = create_model(
^^^^^^^^^^^^^
File "/Users/user/miniconda3/envs/comfy-svd/lib/python3.12/site-packages/open_clip/factory.py", line 276, in create_model
model.to(device=device)
File "/Users/user/miniconda3/envs/comfy-svd/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1170, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "/Users/user/miniconda3/envs/comfy-svd/lib/python3.12/site-packages/torch/nn/modules/module.py", line 778, in _apply
module._apply(fn)
File "/Users/user/miniconda3/envs/comfy-svd/lib/python3.12/site-packages/torch/nn/modules/module.py", line 778, in _apply
module._apply(fn)
File "/Users/user/miniconda3/envs/comfy-svd/lib/python3.12/site-packages/torch/nn/modules/module.py", line 803, in _apply
param_applied = fn(param)
^^^^^^^^^
File "/Users/user/miniconda3/envs/comfy-svd/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1156, in convert
return t.to(
^^^^^
File "/Users/user/miniconda3/envs/comfy-svd/lib/python3.12/site-packages/torch/cuda/__init__.py", line 309, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Prompt executed in 9.38 seconds
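From the traceback, the crash happens when the MotionCtrl-SVD node passes a CUDA device to `open_clip.create_model_and_transforms`, and this PyTorch build has no CUDA support at all. As a sketch of the kind of device fallback I'd expect the node to do instead (the helper name here is mine, not from the node's code):

```python
import torch

def pick_device() -> torch.device:
    """Fall back gracefully: CUDA if compiled in, then Apple MPS, then CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

# On this machine this would select mps rather than the hard-coded cuda
# device that triggers "Torch not compiled with CUDA enabled".
print(pick_device())
```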
Thanks for the model. I have a MacBook M3 Max with 64 GB of shared RAM.
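For completeness, the MPS backend itself is detected in this environment (ComfyUI reports `Device: mps` at startup), so the failure is specific to the node requesting CUDA. A quick sanity check along these lines runs fine here:

```python
import torch

if torch.backends.mps.is_available():
    # Allocate a small tensor on the Apple GPU and run a trivial op.
    x = torch.ones(2, 2, device="mps")
    print(x.sum().item())  # 4.0
else:
    print("MPS backend not available in this PyTorch build")
```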