kijai / ComfyUI-KwaiKolorsWrapper

Diffusers wrapper to run Kwai-Kolors model
Apache License 2.0

Error occurred when executing KolorsTextEncode: Torch not compiled with CUDA enabled #20

foggyghost0 closed this issue 1 month ago

foggyghost0 commented 1 month ago

Running on a Mac M2 Max and getting this error. How can I fix it?

Full error trace:

```
  File "/Users/xxx/Library/Application Support/StabilityMatrix/Packages/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/Users/xxx/Library/Application Support/StabilityMatrix/Packages/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/Users/xxx/Library/Application Support/StabilityMatrix/Packages/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/Users/xxx/Library/Application Support/StabilityMatrix/Packages/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/nodes.py", line 299, in encode
    ).to('cuda')
  File "/Users/xxx/Library/Application Support/StabilityMatrix/Packages/ComfyUI/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 800, in to
    self.data = {k: v.to(device=device) for k, v in self.data.items()}
  File "/Users/xxx/Library/Application Support/StabilityMatrix/Packages/ComfyUI/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 800, in <dictcomp>
    self.data = {k: v.to(device=device) for k, v in self.data.items()}
  File "/Users/xxx/Library/Application Support/StabilityMatrix/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
```

kijai commented 1 month ago

I had missed that hardcoded cuda call. You can try again now, but I really don't know if it can work on MPS. Basically it should, since it mostly uses diffusers, and I'm assuming they support MPS. The quantization, however, doesn't, so you're stuck trying the fp16 weights.
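For anyone porting similar nodes, the usual portable pattern replaces the hardcoded device string with a runtime check. A minimal sketch (the tokenizer and model name here are placeholders, not the wrapper's actual code):

```python
import torch
from transformers import AutoTokenizer

# Pick the best available backend instead of hardcoding "cuda":
# "cuda" on NVIDIA boxes, "mps" on Apple Silicon, "cpu" otherwise.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder model
inputs = tokenizer("a photo of an astronaut", return_tensors="pt")
inputs = inputs.to(device)  # instead of the hardcoded .to('cuda') from the trace
```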

foggyghost0 commented 1 month ago

> I had missed that hardcoded cuda call. You can try again now, but I really don't know if it can work on MPS. Basically it should, since it mostly uses diffusers, and I'm assuming they support MPS. The quantization, however, doesn't, so you're stuck trying the fp16 weights.

It works now! Thank you so much for the fast fix and for your work!