kijai / ComfyUI-KwaiKolorsWrapper

Diffusers wrapper to run the Kwai-Kolors model
Apache License 2.0

macOS issue with the Load ChatGLM3 Model node #9

Closed · jwooldridge234 closed 1 month ago

jwooldridge234 commented 1 month ago

Hi there!

Tried running this on macOS, using the Load ChatGLM3 Model node with the 8-bit and 4-bit safetensors, and I get some interesting errors:

4-bit model, with the node set to fp16, q8, or q4 (makes no difference):

Error occurred when executing LoadChatGLM3:

Trying to set a tensor of shape torch.Size([4096, 6848]) in "weight" (which has shape torch.Size([4096, 13696])), this look incorrect.

File "/Users/jackwooldridge/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jackwooldridge/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jackwooldridge/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jackwooldridge/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/nodes.py", line 130, in loadmodel
set_module_tensor_to_device(text_encoder, key, device=offload_device, value=text_encoder_sd[key])
File "/Users/jackwooldridge/.pyenv/versions/3.12.4/lib/python3.12/site-packages/accelerate/utils/modeling.py", line 358, in set_module_tensor_to_device
raise ValueError(
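
For context: the mismatched width is exactly half the expected one (2 × 6848 = 13696), which matches a 4-bit checkpoint, packing two values per int8 byte, being loaded into a freshly built full-precision Linear. A minimal sketch of the same failure outside ComfyUI, assuming only torch and accelerate:

```python
import torch
from accelerate.utils import set_module_tensor_to_device

# Full-precision layer as the node builds it: weight shape [4096, 13696].
layer = torch.nn.Linear(13696, 4096, bias=False)

# Pre-quantized 4-bit tensor: two 4-bit values packed per int8,
# so 6848 columns instead of 13696.
packed = torch.zeros(4096, 6848, dtype=torch.int8)

# Raises ValueError: Trying to set a tensor of shape torch.Size([4096, 6848])
# in "weight" (which has shape torch.Size([4096, 13696])) ...
set_module_tensor_to_device(layer, "weight", device="cpu", value=packed)
```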

8-bit model, with any quant setting:

Error occurred when executing LoadChatGLM3:

Linear(in_features=13696, out_features=4096, bias=False) does not have a parameter or a buffer named weight_scale.

File "/Users/jackwooldridge/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jackwooldridge/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jackwooldridge/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jackwooldridge/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/nodes.py", line 130, in loadmodel
set_module_tensor_to_device(text_encoder, key, device=offload_device, value=text_encoder_sd[key])
File "/Users/jackwooldridge/.pyenv/versions/3.12.4/lib/python3.12/site-packages/accelerate/utils/modeling.py", line 331, in set_module_tensor_to_device
raise ValueError(f"{module} does not have a parameter or a buffer named {tensor_name}.")
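
The 8-bit case fails one step earlier for the same underlying reason: the pre-quantized checkpoint ships an extra weight_scale buffer per layer, and a plain nn.Linear has no slot for it. A minimal sketch, again assuming only torch and accelerate:

```python
import torch
from accelerate.utils import set_module_tensor_to_device

layer = torch.nn.Linear(13696, 4096, bias=False)
scale = torch.ones(4096, dtype=torch.float16)  # per-row scale from the checkpoint

# Raises ValueError: Linear(in_features=13696, out_features=4096, bias=False)
# does not have a parameter or a buffer named weight_scale.
set_module_tensor_to_device(layer, "weight_scale", device="cpu", value=scale)
```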

I'm on the latest version of ComfyUI, and I made sure to install requirements.txt after pulling your latest version. It breaks even when launching with plain python main.py and no extra arguments.

kijai commented 1 month ago

Yeah my bad, I forgot to actually push the update that allows using those. Can you try now?

jwooldridge234 commented 1 month ago

Just did. Looks like the node relies on CUDA (I get this error):

Error occurred when executing LoadChatGLM3:

Torch not compiled with CUDA enabled

File "/Users/jackwooldridge/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jackwooldridge/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jackwooldridge/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jackwooldridge/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/nodes.py", line 124, in loadmodel
text_encoder.quantize(8)
File "/Users/jackwooldridge/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/kolors/models/modeling_chatglm.py", line 852, in quantize
quantize(self.encoder, weight_bit_width)
File "/Users/jackwooldridge/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/kolors/models/quantization.py", line 157, in quantize
weight=layer.self_attention.query_key_value.weight.to(torch.cuda.current_device()),
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jackwooldridge/.pyenv/versions/3.12.4/lib/python3.12/site-packages/torch/cuda/__init__.py", line 778, in current_device
_lazy_init()
File "/Users/jackwooldridge/.pyenv/versions/3.12.4/lib/python3.12/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")

MPS has limited support for quantization (though Hugging Face's new Quanto package works pretty well), so I don't know if this is easily fixable on your end. I can look into options for Mac and maybe open a pull request.
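
For anyone experimenting with a Mac workaround: the hard-coded torch.cuda.current_device() call in quantization.py could in principle be routed through a device helper like the hypothetical sketch below. The caveat is that the quantization kernels themselves are written for CUDA, so picking MPS or CPU here will likely just move the failure further down the stack.

```python
import torch

def quant_device() -> torch.device:
    """Hypothetical helper: pick an available accelerator instead of assuming CUDA."""
    if torch.cuda.is_available():
        return torch.device("cuda", torch.cuda.current_device())
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

# Original (kolors/models/quantization.py, line 157):
#   weight=layer.self_attention.query_key_value.weight.to(torch.cuda.current_device()),
# Device-agnostic variant:
#   weight=layer.self_attention.query_key_value.weight.to(quant_device()),
```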

kijai commented 1 month ago

Yeah looks like quantization doesn't support MPS.