Open · sd2530615 opened 1 week ago
I had the same issue, and it's most likely the same as https://github.com/kijai/ComfyUI-MochiWrapper/issues/61. There are a separate VAE decoder and encoder, which you can get here: https://huggingface.co/Kijai/Mochi_preview_comfy/tree/main. I had the decoder instead of the encoder.
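In case anyone else is unsure which file they grabbed, here is a rough way to peek at the checkpoint's tensor names; the file path is a placeholder, and the prefix layout mentioned in the comments is an assumption on my part, not something confirmed by the repo:

```python
# Rough sketch: inspect a checkpoint's tensor names to see which VAE file
# you actually have. The path below is a placeholder; adjust it.
from safetensors import safe_open

path = "models/vae/mochi_vae.safetensors"  # placeholder path

with safe_open(path, framework="pt", device="cpu") as f:
    keys = list(f.keys())

print(len(keys), "tensors")
print("sample keys:", keys[:5])
# Assumption: a combined checkpoint groups keys under "encoder." / "decoder."
# prefixes, while the separate decoder/encoder files use bare key names.
print("top-level prefixes:", sorted({k.split(".")[0] for k in keys}))
```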
Yeah, initially only the decoder was available, so the models are separate. Comfy made a combined VAE with both decoder and encoder, which wasn't supported by these nodes. It should be now, though it's very slightly less efficient to load from the combined file instead of the separate models.
I'll still keep the separated models available and the default, since unless you're doing vid2vid you don't need the encoder at all. And I'll keep my VAE setup otherwise too, as it allows torch.compiling the decoder for much faster decoding (see the sketch below).
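For context on the torch.compile point, here is a minimal sketch of the pattern a standalone decoder module makes possible; `ToyDecoder` is just a stand-in, not the actual Mochi decoder from this repo, and the latent shape is illustrative:

```python
# Minimal sketch: a standalone decoder nn.Module can be wrapped directly in
# torch.compile. ToyDecoder is a stand-in, not the real Mochi decoder.
import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(12, 64, 3, padding=1), nn.SiLU(),
            nn.Conv3d(64, 3, 3, padding=1),
        )

    def forward(self, z):
        return self.net(z)

decoder = ToyDecoder().to("cuda").eval()

# Compile once; later decode calls reuse the compiled graph, which is where
# the "much faster decoding" comes from compared to eager PyTorch.
decoder = torch.compile(decoder, mode="max-autotune")

with torch.no_grad():
    z = torch.randn(1, 12, 4, 16, 16, device="cuda")  # illustrative latent shape
    frames = decoder(z)
print(frames.shape)
```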
Got an error when encoding frames:
```
Prompt executed in 1.16 seconds
got prompt
!!! Exception during processing !!! 'layers.0.weight'
Traceback (most recent call last):
  File "F:\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "F:\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "F:\ComfyUI\custom_nodes\ComfyUI-MochiWrapper\nodes.py", line 388, in loadmodel
    set_module_tensor_to_device(encoder, name, dtype=dtype, device=offload_device, value=encoder_sd[name])
KeyError: 'layers.0.weight'

Prompt executed in 1.18 seconds
```
Here is the workflow: wrapper_inversion_example.json
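For anyone else hitting `KeyError: 'layers.0.weight'`: the lookup `encoder_sd[name]` fails because the loaded state dict doesn't contain the key the encoder module expects, e.g. when it's a decoder-only file or a combined checkpoint whose keys carry a prefix. A rough sketch of how one might check and re-key it; the file path and the `encoder.` prefix are assumptions on my part, not confirmed for this repo:

```python
# Rough diagnostic: does the checkpoint contain the key the encoder loader
# expects, and if not, are the encoder weights hidden behind a prefix?
# File path and the "encoder." prefix are assumptions, not confirmed here.
from safetensors.torch import load_file

full_sd = load_file("F:/ComfyUI/models/vae/mochi_combined_vae.safetensors")  # placeholder

print("'layers.0.weight' present:", "layers.0.weight" in full_sd)

# Re-key a (hypothetically) prefixed combined checkpoint so the bare encoder
# keys line up with what the loader looks up.
encoder_sd = {k[len("encoder."):]: v for k, v in full_sd.items()
              if k.startswith("encoder.")}

if not encoder_sd:
    print("No 'encoder.'-prefixed keys found; this may be a decoder-only file.")
else:
    print(len(encoder_sd), "encoder tensors after re-keying, e.g.",
          next(iter(encoder_sd)))
```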