chaojie / ComfyUI-MotionCtrl-SVD


Model size and VRAM usage #7

Open rezponze opened 5 months ago

rezponze commented 5 months ago

Thanks for this node! The installation is a bit tricky, but once it's running, the results are very consistent.

I'm running the workflow with no problems on a 24GB 3090, but I get an OOM error on my 12GB 3060.

I tried converting the model to fp16, but I still get an OOM error.
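
For reference, here's roughly the conversion I tried (a minimal sketch; the checkpoint filename and the "state_dict" key are assumptions based on typical PyTorch checkpoints, not necessarily this repo's actual layout):

```python
# Minimal sketch: cast all floating-point weights in a checkpoint to fp16.
# "motionctrl_svd.ckpt" is a placeholder path; adjust to your model file.
import torch

ckpt = torch.load("motionctrl_svd.ckpt", map_location="cpu")
# Some checkpoints nest the weights under a "state_dict" key; fall back
# to the top-level dict if not.
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

for k, v in state.items():
    if isinstance(v, torch.Tensor) and v.is_floating_point():
        state[k] = v.half()

torch.save(ckpt, "motionctrl_svd_fp16.ckpt")
```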

Is there a way to run this on 12GB?

Appreciate your work, thanks in advance!

OOM log below:

Error occurred when executing Load Motionctrl-SVD Checkpoint:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated     : 11.27 GiB
Requested               : 112.50 MiB
Device limit            : 12.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction)
                        : 17179869184.00 GiB

  File "E:\AI\Apps\StableDiffusionWebUI\ComfyUI\ComfyUI\execution.py", line 155, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "E:\AI\Apps\StableDiffusionWebUI\ComfyUI\ComfyUI\execution.py", line 85, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "E:\AI\Apps\StableDiffusionWebUI\ComfyUI\ComfyUI\execution.py", line 78, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "E:\AI\Apps\StableDiffusionWebUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-MotionCtrl-SVD\nodes.py", line 152, in load_checkpoint
    model = build_model(config_path, ckpt_path, device, frame_length, steps)
  File "E:\AI\Apps\StableDiffusionWebUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-MotionCtrl-SVD\gradio_utils\motionctrl_cmcm_gradio.py", line 55, in build_model
    model, filter = load_model(
  File "E:\AI\Apps\StableDiffusionWebUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-MotionCtrl-SVD\gradio_utils\motionctrl_cmcm_gradio.py", line 281, in load_model
    model = model.to(device).eval()
  File "E:\AI\Apps\StableDiffusionWebUI\ComfyUI\python_embeded\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "E:\AI\Apps\StableDiffusionWebUI\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1160, in to
    return self._apply(convert)
  File "E:\AI\Apps\StableDiffusionWebUI\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)
  File "E:\AI\Apps\StableDiffusionWebUI\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)
  File "E:\AI\Apps\StableDiffusionWebUI\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)
  [Previous line repeated 4 more times]
  File "E:\AI\Apps\StableDiffusionWebUI\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
    param_applied = fn(param)
  File "E:\AI\Apps\StableDiffusionWebUI\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
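
If it helps with debugging: the allocation fails at the `model = model.to(device).eval()` call in `motionctrl_cmcm_gradio.py`, i.e. the full fp32 model never fits in 12 GB. One workaround I'm considering (an untested sketch, assuming nothing downstream strictly requires fp32) is to cast the model to half precision before the device transfer:

```python
# Untested sketch: halve the weights on CPU first, so the fp32 copy
# never has to fit in VRAM. Original line in load_model was:
#   model = model.to(device).eval()
model = model.half().to(device).eval()
```

Note that SVD layers can be numerically sensitive in fp16, so this may trade the OOM for quality or NaN issues; I haven't verified it.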