kijai / ComfyUI-KwaiKolorsWrapper

Diffusers wrapper to run Kwai-Kolors model
Apache License 2.0

Error occurred when executing DownloadAndLoadChatGLM3: "Torch not compiled with CUDA enabled" — looking for a fix #32

Open BannyLon opened 1 month ago

BannyLon commented 1 month ago

My machine is a Mac with an M2 chip. When running the ComfyUI-KwaiKolorsWrapper plugin:

1. fp16 - 12 GB runs out of VRAM;
2. quant8 - 8-9 GB and quant4 - 4-5 GB fail with the following error:

```
!!! Exception during processing!!! Torch not compiled with CUDA enabled
Traceback (most recent call last):
  File "/Users/habhy/Sites/ComfyUI/execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/Users/habhy/Sites/ComfyUI/execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/Users/habhy/Sites/ComfyUI/execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/Users/habhy/Sites/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/nodes.py", line 188, in loadmodel
    text_encoder.quantize(4)
  File "/Users/habhy/Sites/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/kolors/models/modeling_chatglm.py", line 852, in quantize
    quantize(self.encoder, weight_bit_width)
  File "/Users/habhy/Sites/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/kolors/models/quantization.py", line 157, in quantize
    weight=layer.self_attention.query_key_value.weight.to(torch.cuda.current_device()),
  File "/Users/habhy/Sites/ComfyUI/myenv/lib/python3.11/site-packages/torch/cuda/__init__.py", line 778, in current_device
    _lazy_init()
  File "/Users/habhy/Sites/ComfyUI/myenv/lib/python3.11/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```
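The traceback shows why quantization fails on Apple Silicon: `quantization.py` moves weights with `torch.cuda.current_device()`, which asserts on any PyTorch build that ships without CUDA support (as the macOS wheels do). A quick diagnostic to confirm which backends your PyTorch build actually supports — a sketch for debugging, not part of the plugin:

```python
import torch

# On a Mac M2 the stock PyTorch wheel is built without CUDA,
# so any call into torch.cuda.* raises the assertion seen above.
print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())         # expected: False on macOS
print("MPS available:", torch.backends.mps.is_available())  # expected: True on Apple Silicon
```

If `torch.cuda.is_available()` is `False`, any code path that unconditionally calls `torch.cuda.current_device()` will raise exactly this `AssertionError`.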

BannyLon commented 1 month ago

fp16 works now, but generating a single 1024px image takes 6 minutes 20 seconds. quant8 (8-9 GB) and quant4 (4-5 GB) still fail with AssertionError: Torch not compiled with CUDA enabled.
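Both quantized paths fail at the same hard-coded `torch.cuda.current_device()` call in `quantization.py` (line 157 in the traceback). A device-agnostic fallback along these lines would at least select MPS or CPU instead of asserting — a minimal sketch, not the maintainer's fix; `pick_device` is a hypothetical helper:

```python
import torch

def pick_device() -> torch.device:
    """Hypothetical helper: prefer CUDA, then Apple MPS, then CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda", torch.cuda.current_device())
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

# In quantization.py, the failing line would then read roughly:
#   weight=layer.self_attention.query_key_value.weight.to(pick_device()),
# instead of:
#   weight=layer.self_attention.query_key_value.weight.to(torch.cuda.current_device()),
```

Note that whether the int8/int4 quantization kernels themselves can run on MPS is a separate question — if they are CUDA-only, fp16 (or CPU offload) may remain the only practical path on a Mac.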

Also, I'd like to ask: is this plugin unable to use the latest Kolors IPAdapter?

foggyghost0 commented 1 month ago

I'm hitting the same error.