0xbitches / ComfyUI-LCM

Latent Consistency Model for ComfyUI
GNU General Public License v3.0

Fails on CPU with `RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'` #19


staticfloat commented 10 months ago

I'm trying to run on CPU using the comfy-cpu profile from https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Usage. I have installed this extension in ComfyUI and everything appears to load properly; however, when I actually try to evaluate something, it appears that the `use_fp16` toggle doesn't fully eliminate the use of float16 data types (a standalone repro follows the traceback below):

```
Error occurred when executing LCM_Sampler:

"LayerNormKernelImpl" not implemented for 'Half'

  File "/stable-diffusion/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/stable-diffusion/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/stable-diffusion/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/data/config/comfy/custom_nodes/ComfyUI-LCM/nodes.py", line 63, in sample
    result = self.pipe(
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data/config/comfy/custom_nodes/ComfyUI-LCM/lcm/lcm_pipeline.py", line 195, in __call__
    prompt_embeds = self._encode_prompt(
  File "/data/config/comfy/custom_nodes/ComfyUI-LCM/lcm/lcm_pipeline.py", line 98, in _encode_prompt
    prompt_embeds = self.text_encoder(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 800, in forward
    return self.text_model(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 705, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 632, in forward
    layer_outputs = encoder_layer(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 371, in forward
    hidden_states = self.layer_norm1(hidden_states)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 190, in forward
    return F.layer_norm(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
```
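
The failing op reproduces with plain PyTorch, outside ComfyUI entirely. On the torch build in this image (newer PyTorch releases may implement the CPU kernel), this snippet raises the identical error, since fp16 LayerNorm only exists as a CUDA kernel:

```python
import torch

# fp16 LayerNorm has no CPU kernel in this torch build, only a CUDA one,
# so any fp16 tensor hitting a LayerNorm on CPU fails with this error.
ln = torch.nn.LayerNorm(8).half()           # fp16 weights, as use_fp16 produces
x = torch.randn(2, 8, dtype=torch.float16)  # fp16 activations on CPU
ln(x)  # RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
```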

I'm happy to run any debugging commands that might help track this down. Thank you for putting this together!
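
In case it helps, here's the kind of guard I'd expect to fix this. It's only a sketch of my guess at the fix, not the extension's actual code: `use_fp16` is the node's toggle, and `self.pipe` / the diffusers-style `.to()` call are the names visible in the traceback above.

```python
import torch

def resolve_dtype(use_fp16: bool, device: str) -> torch.dtype:
    # fp16 kernels (LayerNorm included) only exist on CUDA, so the
    # use_fp16 toggle should only take effect when a GPU runs the model;
    # on CPU we must fall back to float32 regardless of the toggle.
    if use_fp16 and device == "cuda":
        return torch.float16
    return torch.float32

# Hypothetical wiring inside the sampler node, before calling self.pipe(...):
# device = "cuda" if torch.cuda.is_available() else "cpu"
# self.pipe.to(device, resolve_dtype(use_fp16, device))
```

Casting the whole pipeline would matter here, because the crash comes from the CLIP text encoder rather than the UNet, so casting only the UNet to float32 wouldn't be enough.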

BinaryQuantumSoul commented 10 months ago

Same for me