Closed: itswhateverman closed this issue 2 months ago
the above replies are malware spam fyi
edit: they've been removed
Thanks for the report.
Try an update (git pull in the cg-mixed-casting directory) and see if it works now.
After updating, it now errors on BFloat16:
I'm trying to load custom models such as jibMixFlux_v10 (fp8). I have this issue with the other custom models I've tried. I don't experience it with fp16 models, which quantize properly.
Mixed Cast Flux Loader
Got unsupported ScalarType BFloat16
Exception Message: Got unsupported ScalarType BFloat16
File "X:\comfy2\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "X:\comfy2\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "X:\comfy2\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "X:\comfy2\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "X:\comfy2\ComfyUI_windows_portable\ComfyUI\custom_nodes\cg-mixed-casting\mixed_gguf_node.py", line 181, in func
sd = mixed_gguf_sd_loader(path, metadata=config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "X:\comfy2\ComfyUI_windows_portable\ComfyUI\custom_nodes\cg-mixed-casting\mixed_gguf_node.py", line 133, in mixed_gguf_sd_loader
qt = quants.quantize(tnsr.to(torch.bfloat16).numpy(), qtype=qtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you go to the file mixed_gguf_node.py and find line 133 (the one shown in the error), change bfloat16 to float and it should work.
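For context, the failure is in the torch-to-numpy hop rather than in the GGUF quantizer: numpy has no bfloat16 dtype, so calling .numpy() on a bfloat16 tensor raises exactly this TypeError, while a float32 tensor converts fine. A standalone illustration (the tensor here is just for the demo, not from the node):

```python
import torch

t = torch.randn(4, 4, dtype=torch.bfloat16)

try:
    t.numpy()                        # fails: numpy has no bfloat16 dtype
except TypeError as e:
    print(e)                         # Got unsupported ScalarType BFloat16

arr = t.to(torch.float).numpy()      # the fix: cast to float32 before .numpy()
print(arr.dtype)                     # float32
```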
That change will be in the next push...
Should be fixed in the new version.
yep thanks
I assume the following indicates we can't start with an fp8 model and need fp16? Could the node upcast first to a usable format? I know it may not be ideal, but it may still have utility.
Error Details
Exception Message: Got unsupported ScalarType Float8_e4m3fn
Stack Trace
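For anyone wanting to try the upcast the question describes in the meantime, here is a minimal sketch, assuming the fp8 failure is the same torch-to-numpy conversion step as before. The helper name is hypothetical and not part of the node, and it needs a PyTorch build that has the float8 dtypes (2.1+):

```python
import torch

def to_numpy_upcast(tnsr: torch.Tensor):
    # numpy has no bfloat16 or float8 dtypes, so upcast those to float32
    # before handing the array to the GGUF quantizer.
    if tnsr.dtype in (torch.bfloat16, torch.float8_e4m3fn, torch.float8_e5m2):
        tnsr = tnsr.to(torch.float32)
    return tnsr.cpu().numpy()

# hypothetical use at the quantize call:
# qt = quants.quantize(to_numpy_upcast(tnsr), qtype=qtype)
```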