Open jepjoo opened 3 weeks ago
After your updates from today, I can generate a clip, but it's just black. I get black output with the ComfyUI nodes as well, so I'm wondering whether this quant is even working properly or just broken.
There was a (now deleted) FP8 scaled quant of Flux by comfyanonymous which had a similar issue, so maybe the quanting process or the Comfy code isn't quite there yet.
Feel free to close if this is clearly not an issue with your code.
It does work with the Comfy native nodes with fp8 fast, at least. That's a lot faster (when uncompiled), but the quality is a lot worse than running fp8 with the wrapper. In fact, even bf16 quality is slightly worse with the native implementation currently, although it's a lot faster as well.
Hiya,
Comfy-Org put out an FP8 scaled version of Mochi. Curious to try what kind of quality can be gotten out of it, but it doesn't seem compatible with this repo.
https://huggingface.co/Comfy-Org/mochi_preview_repackaged/blob/main/split_files/diffusion_models/mochi_preview_fp8_scaled.safetensors
Error I get:

```
!!! Exception during processing !!! 'pos_frequencies'
Traceback (most recent call last):
  File "G:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "G:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "G:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "G:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MochiWrapper\nodes.py", line 190, in loadmodel
    model = T2VSynthMochiModel(
  File "G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MochiWrapper\mochi_preview\t2v_synth_mochi.py", line 182, in __init__
    set_module_tensor_to_device(model, name, dtype=torch.bfloat16, device=self.device, value=dit_sd[name])
```
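For context, the `'pos_frequencies'` KeyError suggests the repackaged checkpoint simply doesn't contain a key the wrapper's loading loop expects, so `dit_sd[name]` raises before any device placement happens. A minimal, hypothetical sketch of a more defensive loading loop (the function name and the plain-dict stand-ins are my own, not the wrapper's actual code) would collect missing keys instead of crashing:

```python
# Hypothetical sketch: iterate over the names the model expects, pull each
# tensor from the checkpoint's state dict if present, and collect the names
# that are absent instead of letting dict indexing raise KeyError.
# In the real wrapper, the "load" step would be the
# set_module_tensor_to_device(...) call shown in the traceback.

def load_with_missing_keys(expected_names, state_dict):
    """Return (loaded, missing): values found in the checkpoint, and the
    expected names the checkpoint does not provide."""
    loaded, missing = {}, []
    for name in expected_names:
        if name in state_dict:
            loaded[name] = state_dict[name]
        else:
            missing.append(name)
    return loaded, missing


if __name__ == "__main__":
    # Toy stand-in for a checkpoint that lacks 'pos_frequencies'.
    checkpoint = {"blocks.0.weight": 1.0}
    loaded, missing = load_with_missing_keys(
        ["pos_frequencies", "blocks.0.weight"], checkpoint
    )
    print(missing)  # reports which keys the checkpoint is missing
```

Logging `missing` this way would at least confirm whether the FP8 scaled file omits `pos_frequencies` (e.g. because the scaled repackaging renamed or dropped it) rather than the wrapper mis-reading it.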