kijai / ComfyUI-MimicMotionWrapper

Apache License 2.0
337 stars · 27 forks

Hi there, I have followed all the instructions carefully but this error has occurred, please help 😓 #80

Open MiladiCode opened 1 month ago

MiladiCode commented 1 month ago

ComfyUI Error Report

Error Details

## System Information
- **ComfyUI Version:** v0.2.3-9-g0dbba9f
- **Arguments:** ComfyUI\main.py --windows-standalone-build
- **OS:** nt
- **Python Version:** 3.11.9 (tags/v3.11.9:de54cf5, Apr  2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.4.1+cu124
## Devices

- **Name:** cuda:0 NVIDIA GeForce RTX 2050 : cudaMallocAsync
  - **Type:** cuda
  - **VRAM Total:** 4294443008
  - **VRAM Free:** 142640482
  - **Torch VRAM Total:** 3321888768
  - **Torch VRAM Free:** 43602478

## Logs

```
2024-10-16 07:20:51,581 - root - INFO - Total VRAM 4096 MB, total RAM 16108 MB
2024-10-16 07:20:51,582 - root - INFO - pytorch version: 2.4.1+cu124
2024-10-16 07:20:52,898 - xformers - WARNING - WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for: PyTorch 2.3.1+cu121 with CUDA 1201 (you have 2.4.1+cu124), Python 3.11.9 (you have 3.11.9). Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers). Memory-efficient attention, SwiGLU, sparse and more won't be available. Set XFORMERS_MORE_DETAILS=1 for more details
2024-10-16 07:20:55,732 - root - INFO - xformers version: 0.0.27
2024-10-16 07:20:55,733 - root - INFO - Set vram state to: NORMAL_VRAM
2024-10-16 07:20:55,734 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 2050 : cudaMallocAsync
2024-10-16 07:20:56,119 - root - INFO - Using pytorch cross attention
2024-10-16 07:20:58,624 - root - INFO - [Prompt Server] web root: A:\ComfyUI_windows_portable\ComfyUI\web
2024-10-16 07:21:02,312 - root - INFO - Total VRAM 4096 MB, total RAM 16108 MB
2024-10-16 07:21:02,313 - root - INFO - pytorch version: 2.4.1+cu124
2024-10-16 07:21:02,314 - root - INFO - xformers version: 0.0.27
2024-10-16 07:21:02,315 - root - INFO - Set vram state to: NORMAL_VRAM
2024-10-16 07:21:02,316 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 2050 : cudaMallocAsync
2024-10-16 07:21:10,434 - root - INFO - Import times for custom nodes:
2024-10-16 07:21:10,434 - root - INFO -    0.0 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2024-10-16 07:21:10,434 - root - INFO -    0.0 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\cg-use-everywhere
2024-10-16 07:21:10,435 - root - INFO -    0.0 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
2024-10-16 07:21:10,435 - root - INFO -    0.0 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\mikey_nodes
2024-10-16 07:21:10,435 - root - INFO -    0.0 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-lama-remover
2024-10-16 07:21:10,435 - root - INFO -    0.0 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2024-10-16 07:21:10,436 - root - INFO -    0.0 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MimicMotionWrapper
2024-10-16 07:21:10,436 - root - INFO -    0.0 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Frame-Interpolation
2024-10-16 07:21:10,436 - root - INFO -    0.0 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_essentials
2024-10-16 07:21:10,436 - root - INFO -    0.0 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet
2024-10-16 07:21:10,436 - root - INFO -    0.0 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui
2024-10-16 07:21:10,436 - root - INFO -    0.0 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
2024-10-16 07:21:10,436 - root - INFO -    0.0 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
2024-10-16 07:21:10,437 - root - INFO -    0.1 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
2024-10-16 07:21:10,437 - root - INFO -    0.1 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux
2024-10-16 07:21:10,437 - root - INFO -    0.1 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KJNodes
2024-10-16 07:21:10,437 - root - INFO -    0.1 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2024-10-16 07:21:10,437 - root - INFO -    0.2 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\facerestore_cf
2024-10-16 07:21:10,438 - root - INFO -    0.2 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-segment-anything-2
2024-10-16 07:21:10,438 - root - INFO -    0.3 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_segment_anything
2024-10-16 07:21:10,438 - root - INFO -    0.5 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-tensorops
2024-10-16 07:21:10,438 - root - INFO -    0.5 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-ControlNeXt-SVD
2024-10-16 07:21:10,438 - root - INFO -    0.6 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
2024-10-16 07:21:10,439 - root - INFO -    0.8 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-UniAnimate-W
2024-10-16 07:21:10,439 - root - INFO -    1.0 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes
2024-10-16 07:21:10,439 - root - INFO -    1.3 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-tbox
2024-10-16 07:21:10,439 - root - INFO -    1.6 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
2024-10-16 07:21:10,439 - root - INFO -    2.7 seconds: A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-reactor-node
2024-10-16 07:21:10,454 - root - INFO - Starting server
```
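The xFormers warning in the startup log above is a plain version mismatch rather than a MimicMotion bug: the installed xformers 0.0.27 wheel was compiled against PyTorch 2.3.1+cu121, while the log shows torch 2.4.1+cu124 running, so its C++/CUDA extensions refuse to load. A minimal sketch of the comparison (the version strings are copied from the log; `cuda_tag` is an illustrative helper, not an xformers API):

```python
def cuda_tag(version: str) -> tuple[str, str]:
    """Split a '2.4.1+cu124'-style version string into (base, local/cuda) parts."""
    base, _, local = version.partition("+")
    return base, local

built_for = "2.3.1+cu121"   # what this xformers wheel was compiled against (from the warning)
installed = "2.4.1+cu124"   # the running torch build (from the log)

# The compiled extensions only load when both parts match the running torch build.
compatible = cuda_tag(built_for) == cuda_tag(installed)
print(compatible)  # -> False: reinstall xformers built for torch 2.4.1+cu124
```

With the mismatch present, ComfyUI simply falls back to "pytorch cross attention", as the next log line shows, so this warning is cosmetic unless you specifically want xformers' memory-efficient attention.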

```
2024-10-16 07:21:10,455 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-10-16 07:21:20,483 - root - INFO - got prompt
2024-10-16 07:21:22,275 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-10-16 07:21:22,285 - root - INFO - model_type V_PREDICTION_EDM
2024-10-16 07:21:25,112 - root - INFO - Using pytorch attention in VAE
2024-10-16 07:21:25,160 - root - INFO - Using pytorch attention in VAE
2024-10-16 07:22:12,046 - root - INFO - Requested to load CLIPVisionModelProjection
2024-10-16 07:22:12,046 - root - INFO - Loading 1 new model
2024-10-16 07:22:14,217 - root - INFO - loaded completely 0.0 1208.09814453125 True
2024-10-16 07:22:14,876 - root - INFO - Requested to load AutoencodingEngine
2024-10-16 07:22:14,876 - root - INFO - Loading 1 new model
2024-10-16 07:22:15,297 - root - INFO - loaded completely 0.0 186.42957878112793 True
2024-10-16 07:22:17,101 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-10-16 07:22:17,112 - root - INFO - model_type V_PREDICTION_EDM
2024-10-16 07:22:23,611 - root - INFO - Requested to load SVD_img2vid
2024-10-16 07:22:23,611 - root - INFO - Loading 1 new model
2024-10-16 07:22:25,063 - root - INFO - loaded partially 1293.489553833008 1293.489387512207 0
2024-10-16 07:22:26,687 - root - ERROR - !!! Exception during processing !!! Allocation on device
2024-10-16 07:22:26,722 - root - ERROR - Traceback (most recent call last):
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "A:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1437, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "A:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1404, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
    raise e
  File "A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
  File "A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 526, in motion_sample
    latents = orig_comfy_sample(model, noise, *args, **kwargs)
  File "A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 829, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 729, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 716, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 695, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 600, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 155, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 682, in __call__
    return self.predict_noise(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 685, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 635, in evolved_sampling_function
    cond_pred, uncond_pred = sliding_calc_conds_batch(model, [cond, uncond], x, timestep, model_options)
  File "A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 887, in sliding_calc_conds_batch
    sub_conds_out = calc_conds_batch_wrapper(model, sub_conds, sub_x, sub_timestep, model_options)
  File "A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 993, in calc_conds_batch_wrapper
    return comfy.samplers.calc_cond_batch(model, conds, x_in, timestep, model_options)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69, in apply_model_uncond_cleanup_wrapper
    return orig_apply_model(self, *args, **kwargs)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 142, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 158, in forward_timestep_embed
    x = layer(x, context, time_context, num_video_frames, image_only_indicator, transformer_options)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 839, in forward
    x = block(
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 631, in forward
    x = self.ff(self.norm3(x))
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 84, in forward
    return self.net(x)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\container.py", line 219, in forward
    input = module(input)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 63, in forward
    x, gate = self.proj(x).chunk(2, dim=-1)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 76, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 72, in forward_comfy_cast_weights
    return torch.nn.functional.linear(input, weight, bias)
torch.OutOfMemoryError: Allocation on device
```
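The OutOfMemoryError above is not surprising on a 4 GB card: at SVD's video batch sizes, a single fp16 intermediate activation in one attention feed-forward layer can already exceed the free VRAM reported in the device section. A rough back-of-envelope sketch (every shape below is an assumed, illustrative value, not read from the actual model):

```python
# Illustrative estimate of one fp16 intermediate tensor's size during video
# sampling. All shapes are assumptions for the sake of the arithmetic.
def tensor_bytes(*shape, dtype_bytes=2):
    """Bytes needed for a dense tensor of the given shape (fp16 = 2 bytes)."""
    n = 1
    for d in shape:
        n *= d
    return n * dtype_bytes

frames = 14          # assumed SVD video length
tokens = 72 * 128    # assumed latent h*w for a 576x1024 frame (downscaled /8)
inner  = 5120        # assumed inner width of a feed-forward block

gib = tensor_bytes(frames, tokens, inner) / 2**30
print(f"{gib:.2f} GiB for one intermediate tensor")  # ~1.23 GiB under these assumptions
```

One such allocation, on top of the partially loaded UNet weights, is enough to exhaust the ~0.13 GB of free VRAM the report shows, which is why ComfyUI immediately unloads all models afterwards.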

```
2024-10-16 07:22:26,735 - root - ERROR - Got an OOM, unloading all loaded models.
2024-10-16 07:22:27,438 - root - INFO - Prompt executed in 66.83 seconds
2024-10-16 07:26:29,948 - root - INFO - got prompt
2024-10-16 07:26:52,020 - root - INFO - Requested to load CLIPVisionModelProjection
2024-10-16 07:26:52,020 - root - INFO - Loading 1 new model
2024-10-16 07:26:53,179 - root - INFO - loaded completely 0.0 1208.09814453125 True
2024-10-16 07:26:53,514 - root - INFO - Requested to load AutoencodingEngine
2024-10-16 07:26:53,514 - root - INFO - Loading 1 new model
2024-10-16 07:26:53,653 - root - INFO - loaded completely 0.0 186.42957878112793 True
2024-10-16 07:26:54,079 - root - INFO - Requested to load SVD_img2vid
2024-10-16 07:26:54,079 - root - INFO - Loading 1 new model
2024-10-16 07:26:55,242 - root - INFO - loaded partially 1290.2895538330079 1290.288703918457 0
2024-10-16 08:16:35,563 - root - INFO - Requested to load AutoencodingEngine
2024-10-16 08:16:35,565 - root - INFO - Loading 1 new model
2024-10-16 08:16:36,307 - root - INFO - loaded completely 0.0 186.42957878112793 True
2024-10-16 08:17:16,953 - root - INFO - Prompt executed in 3046.87 seconds
2024-10-16 08:24:31,457 - root - INFO - got prompt
2024-10-16 08:24:55,498 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading model from: A:\ComfyUI_windows_portable\ComfyUI\models\mimicmotion\MimicMotionMergedUnet_1-1-fp16.safetensors
2024-10-16 08:24:55,501 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading UNET
2024-10-16 08:25:05,236 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading VAE
2024-10-16 08:25:06,304 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading IMAGE_ENCODER
2024-10-16 08:25:06,306 - root - ERROR - !!! Exception during processing !!! Error no file named pytorch_model.fp16.bin, model.fp16.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory A:\ComfyUI_windows_portable\ComfyUI\models\diffusers\stable-video-diffusion-img2vid-xt-1-1.
```
```
2024-10-16 08:25:06,314 - root - ERROR - Traceback (most recent call last):
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MimicMotionWrapper\nodes.py", line 134, in loadmodel
    self.image_encoder = CLIPVisionModelWithProjection.from_pretrained(svd_path, subfolder="image_encoder", variant="fp16", low_cpu_mem_usage=True).to(dtype).to(device).eval()
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 3558, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.fp16.bin, model.fp16.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory A:\ComfyUI_windows_portable\ComfyUI\models\diffusers\stable-video-diffusion-img2vid-xt-1-1.
```

```
2024-10-16 08:25:06,317 - root - INFO - Prompt executed in 34.74 seconds
2024-10-16 09:18:26,582 - root - INFO - got prompt
2024-10-16 09:18:26,940 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading model from: A:\ComfyUI_windows_portable\ComfyUI\models\mimicmotion\MimicMotionMergedUnet_1-1-fp16.safetensors
2024-10-16 09:18:26,963 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading UNET
2024-10-16 09:18:39,118 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading VAE
2024-10-16 09:18:39,120 - root - ERROR - !!! Exception during processing !!! Error no file named config.json found in directory A:\ComfyUI_windows_portable\ComfyUI\models\diffusers\stable-video-diffusion-img2vid-xt-1-1.
2024-10-16 09:18:39,134 - root - ERROR - Traceback (most recent call last):
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MimicMotionWrapper\nodes.py", line 131, in loadmodel
    self.vae = AutoencoderKLTemporalDecoder.from_pretrained(svd_path, subfolder="vae", variant="fp16", low_cpu_mem_usage=True).to(dtype).to(device).eval()
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\modeling_utils.py", line 612, in from_pretrained
    config, unused_kwargs, commit_hash = cls.load_config(
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\configuration_utils.py", line 373, in load_config
    raise EnvironmentError(
OSError: Error no file named config.json found in directory A:\ComfyUI_windows_portable\ComfyUI\models\diffusers\stable-video-diffusion-img2vid-xt-1-1.
```

```
2024-10-16 09:18:39,141 - root - INFO - Prompt executed in 12.33 seconds
2024-10-16 09:34:35,423 - root - INFO - got prompt
2024-10-16 09:34:35,712 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading model from: A:\ComfyUI_windows_portable\ComfyUI\models\mimicmotion\MimicMotionMergedUnet_1-1-fp16.safetensors
2024-10-16 09:34:35,714 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading UNET
2024-10-16 09:34:42,978 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading VAE
2024-10-16 09:34:43,842 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading IMAGE_ENCODER
2024-10-16 09:34:43,848 - root - ERROR - !!! Exception during processing !!! Error no file named pytorch_model.fp16.bin, model.fp16.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory A:\ComfyUI_windows_portable\ComfyUI\models\diffusers\stable-video-diffusion-img2vid-xt-1-1.
2024-10-16 09:34:43,852 - root - ERROR - Traceback (most recent call last):
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MimicMotionWrapper\nodes.py", line 134, in loadmodel
    self.image_encoder = CLIPVisionModelWithProjection.from_pretrained(svd_path, subfolder="image_encoder", variant="fp16", low_cpu_mem_usage=True).to(dtype).to(device).eval()
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 3558, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.fp16.bin, model.fp16.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory A:\ComfyUI_windows_portable\ComfyUI\models\diffusers\stable-video-diffusion-img2vid-xt-1-1.
```

```
2024-10-16 09:34:43,859 - root - INFO - Prompt executed in 8.20 seconds
2024-10-16 09:37:17,911 - root - INFO - got prompt
2024-10-16 09:37:18,164 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading model from: A:\ComfyUI_windows_portable\ComfyUI\models\mimicmotion\MimicMotionMergedUnet_1-1-fp16.safetensors
2024-10-16 09:37:18,165 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading UNET
2024-10-16 09:37:23,637 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading VAE
2024-10-16 09:37:24,407 - ComfyUI-MimicMotionWrapper.nodes - INFO - Loading IMAGE_ENCODER
2024-10-16 09:37:24,409 - root - ERROR - !!! Exception during processing !!! Error no file named pytorch_model.fp16.bin, model.fp16.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory A:\ComfyUI_windows_portable\ComfyUI\models\diffusers\stable-video-diffusion-img2vid-xt-1-1.
2024-10-16 09:37:24,412 - root - ERROR - Traceback (most recent call last):
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "A:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "A:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MimicMotionWrapper\nodes.py", line 134, in loadmodel
    self.image_encoder = CLIPVisionModelWithProjection.from_pretrained(svd_path, subfolder="image_encoder", variant="fp16", low_cpu_mem_usage=True).to(dtype).to(device).eval()
  File "A:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 3558, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.fp16.bin, model.fp16.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory A:\ComfyUI_windows_portable\ComfyUI\models\diffusers\stable-video-diffusion-img2vid-xt-1-1.
```

2024-10-16 09:37:24,415 - root - INFO - Prompt executed in 6.30 seconds
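The traceback shows `from_pretrained(svd_path, subfolder="image_encoder", variant="fp16", ...)` failing, which means transformers could not find any of the fp16 weight files it accepts inside the `image_encoder` subfolder of the SVD model directory. A minimal sketch to check which of those files are actually on disk (the directory path is taken from the log above; the candidate filenames are the ones listed in the error message):

```python
from pathlib import Path

# Weight filenames transformers' from_pretrained(..., variant="fp16")
# will accept, per the OSError message above.
EXPECTED = [
    "model.fp16.safetensors",
    "pytorch_model.fp16.bin",
    "tf_model.h5",
    "model.ckpt.index",
    "flax_model.msgpack",
]

def find_encoder_weights(encoder_dir: str) -> list[str]:
    """Return the expected weight files actually present in encoder_dir."""
    d = Path(encoder_dir)
    return [name for name in EXPECTED if (d / name).is_file()]

if __name__ == "__main__":
    # Path from the log above; note the loader looks in the
    # image_encoder *subfolder*, not the model root.
    svd = r"A:\ComfyUI_windows_portable\ComfyUI\models\diffusers\stable-video-diffusion-img2vid-xt-1-1"
    found = find_encoder_weights(svd + r"\image_encoder")
    print("found:", found or "none - the fp16 image encoder weights are missing")
```

If this prints nothing found, the fix is usually to re-download the `image_encoder` folder of `stable-video-diffusion-img2vid-xt-1-1` (including `model.fp16.safetensors`) rather than reinstalling the node.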

## Attached Workflow
Please make sure that the workflow does not contain any sensitive information such as API keys or passwords.

```json
{"last_node_id":58,"last_link_id":151,"nodes":[{"id":9,"type":"GetImageSizeAndCount","pos":{"0":825,"1":518},"size":{"0":210,"1":86},"flags":{},"order":6,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":112}],"outputs":[{"name":"image","type":"IMAGE","links":[148],"slot_index":0,"shape":3},{"name":"width","type":"INT","links":null,"shape":3},{"name":"height","type":"INT","links":null,"shape":3},{"name":"count","type":"INT","links":null,"shape":3}],"properties":{"Node name for S&R":"GetImageSizeAndCount"},"widgets_values":[]},{"id":42,"type":"MimicMotionGetPoses","pos":{"0":327,"1":702},"size":{"0":330,"1":126},"flags":{},"order":5,"mode":0,"inputs":[{"name":"ref_image","type":"IMAGE","link":110},{"name":"pose_images","type":"IMAGE","link":111}],"outputs":[{"name":"poses_with_ref","type":"IMAGE","links":[112,114],"slot_index":0,"shape":3},{"name":"pose_images","type":"IMAGE","links":[138],"slot_index":1,"shape":3}],"properties":{"Node name for S&R":"MimicMotionGetPoses"},"widgets_values":[true,true,true]},{"id":58,"type":"MimicMotionDecode","pos":{"0":1466,"1":396},"size":{"0":255.466796875,"1":78},"flags":{},"order":9,"mode":0,"inputs":[{"name":"mimic_pipeline","type":"MIMICPIPE","link":150,"slot_index":0},{"name":"samples","type":"LATENT","link":149}],"outputs":[{"name":"images","type":"IMAGE","links":[151],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"MimicMotionDecode"},"widgets_values":[4]},{"id":57,"type":"MimicMotionSampler","pos":{"0":1101,"1":419},"size":{"0":315,"1":430},"flags":{},"order":8,"mode":0,"inputs":[{"name":"mimic_pipeline","type":"MIMICPIPE","link":146},{"name":"ref_image","type":"IMAGE","link":147},{"name":"pose_images","type":"IMAGE","link":148},{"name":"optional_scheduler","type":"DIFFUSERS_SCHEDULER","link":null,"shape":7}],"outputs":[{"name":"samples","type":"LATENT","links":[149],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"MimicMotionSampler"},"widgets_values":[20,2,2,42,"fixed",15,0,16,6,false,1,0,1,1]},{"id":3,"type":"LoadImage","pos":{"0":-393,"1":311},"size":{"0":213.0849151611328,"1":410.70074462890625},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[61],"slot_index":0,"shape":3},{"name":"MASK","type":"MASK","links":null,"shape":3}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["Leonardo_Anime_XL_30_years_old_girl_3D_muddle_with_tattoo_stan_2.jpg","image"]},{"id":5,"type":"VHS_LoadVideo","pos":{"0":-402,"1":787},"size":[247.455078125,680.3645833333333],"flags":{},"order":1,"mode":0,"inputs":[{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[86],"slot_index":0,"shape":3},{"name":"frame_count","type":"INT","links":null,"shape":3},{"name":"audio","type":"AUDIO","links":null,"shape":3},{"name":"video_info","type":"VHS_VIDEOINFO","links":null,"shape":3}],"properties":{"Node name for S&R":"VHS_LoadVideo"},"widgets_values":{"video":"111.mp4","force_rate":0,"force_size":"Disabled","custom_width":512,"custom_height":512,"frame_load_cap":15,"skip_first_frames":0,"select_every_nth":2,"choose video to upload":"image","videopreview":{"hidden":false,"paused":false,"params":{"frame_load_cap":15,"skip_first_frames":0,"force_rate":0,"filename":"111.mp4","type":"input","format":"video/mp4","select_every_nth":2}}}},{"id":28,"type":"ImageResizeKJ","pos":{"0":-71,"1":481},"size":{"0":315,"1":242},"flags":{},"order":3,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":61},{"name":"get_image_size","type":"IMAGE","link":null,"shape":7},{"name":"width_input","type":"INT","link":null,"widget":{"name":"width_input"}},{"name":"height_input","type":"INT","link":null,"widget":{"name":"height_input"}}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[95,110,147],"slot_index":0,"shape":3},{"name":"width","type":"INT","links":[88],"slot_index":1,"shape":3},{"name":"height","type":"INT","links":[89],"slot_index":2,"shape":3}],"properties":{"Node name for S&R":"ImageResizeKJ"},"widgets_values":[424,848,"lanczos",true,64,0,0,"disabled"]},{"id":35,"type":"ImageResizeKJ","pos":{"0":-75,"1":781},"size":{"0":315,"1":242},"flags":{},"order":4,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":86},{"name":"get_image_size","type":"IMAGE","link":null,"shape":7},{"name":"width_input","type":"INT","link":88,"widget":{"name":"width_input"}},{"name":"height_input","type":"INT","link":89,"widget":{"name":"height_input"}}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[111,137],"slot_index":0,"shape":3},{"name":"width","type":"INT","links":null,"shape":3},{"name":"height","type":"INT","links":null,"shape":3}],"properties":{"Node name for S&R":"ImageResizeKJ"},"widgets_values":[424,848,"lanczos",false,64,0,0,"disabled"]},{"id":2,"type":"DownloadAndLoadMimicMotionModel","pos":{"0":658,"1":230},"size":{"0":404.8147277832031,"1":89.03937530517578},"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"mimic_pipeline","type":"MIMICPIPE","links":[146,150],"shape":3}],"properties":{"Node name for S&R":"DownloadAndLoadMimicMotionModel"},"widgets_values":["MimicMotionMergedUnet_1-1-fp16.safetensors","fp16"]},{"id":16,"type":"VHS_VideoCombine","pos":{"0":1869,"1":160},"size":[2861.660400390625,310],"flags":{},"order":11,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":93},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null,"shape":3}],"properties":{"Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":12,"loop_count":0,"filename_prefix":"MimicMotion","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":19,"save_metadata":true,"pingpong":false,"save_output":false,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"MimicMotion_00002.mp4","subfolder":"","type":"temp","format":"video/h264-mp4","frame_rate":12}}}},{"id":17,"type":"ImageConcatMulti","pos":{"0":1644,"1":830},"size":{"0":210,"1":190},"flags":{},"order":10,"mode":0,"inputs":[{"name":"image_1","type":"IMAGE","link":95},{"name":"image_2","type":"IMAGE","link":137},{"name":"image_3","type":"IMAGE","link":138},{"name":"image_4","type":"IMAGE","link":151}],"outputs":[{"name":"images","type":"IMAGE","links":[93],"slot_index":0,"shape":3}],"properties":{},"widgets_values":[4,"right",false,null]},{"id":37,"type":"VHS_VideoCombine","pos":{"0":666,"1":897},"size":[440,1022.0000000000001],"flags":{},"order":7,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":114},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null,"shape":3}],"properties":{"Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":8,"loop_count":0,"filename_prefix":"MimicPose","format":"image/webp","pingpong":false,"save_output":false,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"MimicPose_00001.webp","subfolder":"","type":"temp","format":"image/webp","frame_rate":8}}}}],"links":[[61,3,0,28,0,"IMAGE"],[86,5,0,35,0,"IMAGE"],[88,28,1,35,2,"INT"],[89,28,2,35,3,"INT"],[93,17,0,16,0,"IMAGE"],[95,28,0,17,0,"IMAGE"],[110,28,0,42,0,"IMAGE"],[111,35,0,42,1,"IMAGE"],[112,42,0,9,0,"IMAGE"],[114,42,0,37,0,"IMAGE"],[137,35,0,17,1,"IMAGE"],[138,42,1,17,2,"IMAGE"],[146,2,0,57,0,"MIMICPIPE"],[147,28,0,57,1,"IMAGE"],[148,9,0,57,2,"IMAGE"],[149,57,0,58,1,"LATENT"],[150,2,0,58,0,"MIMICPIPE"],[151,58,0,17,3,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.7513148009015777,"offset":[11.413255203546816,10.45266484179136]}},"version":0.4}
```
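To sanity-check the pasted workflow (for example, to confirm which custom node types it depends on before reporting or reproducing), the exported JSON can be parsed directly. A minimal sketch; the helper name `node_type_counts` is illustrative, and the workflow is assumed to be saved to a file such as `workflow.json`:

```python
import json
from collections import Counter

def node_type_counts(workflow_json: str) -> Counter:
    """Count node types in an exported ComfyUI workflow JSON string."""
    wf = json.loads(workflow_json)
    return Counter(node["type"] for node in wf["nodes"])

# Usage: save the workflow above as workflow.json, then:
# print(node_type_counts(open("workflow.json").read()))
```

Running this over the workflow above should report the MimicMotion nodes (`DownloadAndLoadMimicMotionModel`, `MimicMotionGetPoses`, `MimicMotionSampler`, `MimicMotionDecode`) alongside the VideoHelperSuite and KJNodes helpers; it will also fail fast with a `json.JSONDecodeError` if the pasted JSON got mangled.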



## Additional Context
(Please add any additional context or steps to reproduce the error here)