Nuked88 / ComfyUI-N-Nodes

A suite of custom nodes for ComfyUI that includes GPT text-prompt generation, LoadVideo, SaveVideo, LoadFramesFromFolder, and FrameInterpolator
MIT License

How to load 1min 30FPS video? Always out of RAM. #35

Closed tommyZZM closed 9 months ago

tommyZZM commented 9 months ago

I ran out of memory when loading a longer video: even with 64 GB of physical RAM, I couldn't handle a 1-minute video at 30 FPS.

I noticed that the video's 1800 frames are extracted successfully, but it seems the encode function then causes a memory overflow.

Is there a way to process these images one after another instead of loading them all into memory at the same time?

How should this scenario be handled or are there any alternatives?
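One common way to bound memory in this scenario is to consume frames through a generator that yields small batches, so only one batch is resident at a time. This is a minimal stdlib-only sketch, not the node's actual implementation; in practice the `frames` iterable would wrap something like per-frame reads from `cv2.VideoCapture` rather than a pre-built list:

```python
def iter_in_batches(frames, batch_size=32):
    """Yield fixed-size batches from any frame iterator, so that at most
    batch_size frames are held in memory at once."""
    batch = []
    for frame in frames:
        batch.append(frame)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # final, possibly short, batch
        yield batch
```

Each yielded batch can be encoded and discarded before the next one is read, keeping peak memory proportional to `batch_size` instead of the full frame count.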


Info:

https://github.com/Nuked88/ComfyUI-N-Nodes/blob/29b2e43baba81ee556b2930b0ca0a9c978c47083/py/video_node.py#L509

            # Retrieve the results
            for future in futures:
                batch_i_tensors = future.result()
                i_list.extend(batch_i_tensors)

        i_tensor = torch.stack(i_list, dim=0)  # <-- the error occurs at this line: unable to allocate enough memory

        if images_limit != 0 or starting_frame != 0:

            b_size=final_count_frame
        else:
            b_size=len(FRAMES)
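Part of the problem is that `torch.stack` allocates a brand-new tensor and copies every per-frame tensor into it, so at the moment of the call roughly twice the footage's memory is live (the list of frames plus the stacked copy). A lower-peak alternative, sketched here with NumPy purely for illustration (the same pattern works with `torch.empty`), is to preallocate the output once and fill it in place:

```python
import numpy as np

def stack_into_preallocated(frame_iter, n_frames, h, w, c=3, dtype=np.float32):
    """Fill a preallocated (n_frames, h, w, c) array frame by frame,
    avoiding the transient list-of-frames + stacked-copy double footprint."""
    out = np.empty((n_frames, h, w, c), dtype=dtype)
    i = 0
    for frame in frame_iter:
        out[i] = frame  # copy directly into the final buffer
        i += 1
    return out[:i]  # trim in case fewer frames were produced
```

The total allocation is the same, but the peak drops from roughly 2x the video size to 1x, which can be the difference between fitting in 64 GB and the `DefaultCPUAllocator` failure below.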

Possibly related:

https://www.reddit.com/r/comfyui/comments/17afmmh/how_to_load_frames_from_a_video_one_by_one/

I have a video and I want to run SD on each frame of that video. I tried the load methods from Was-nodesuite-comfyUI and ComfyUI-N-Nodes in ComfyUI, but they seem to load all of my images into RAM at once. This makes my workflow consume a lot of RAM, and the process gets killed. Is there a way to load the images in a video one at a time (or in batches) to save memory?

My computer configuration: CPU: 12th Gen Intel(R) Core(TM) i5-12400F, RAM: 16 GB, GPU: NVIDIA GeForce RTX 3060 (12 GB VRAM)


ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "H:\WorkShopAI\_COMMUNITY\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "H:\WorkShopAI\_COMMUNITY\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "H:\WorkShopAI\_COMMUNITY\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "H:\WorkShopAI\_COMMUNITY\ComfyUI\custom_nodes\ComfyUI-N-Nodes\py\video_node_advanced.py", line 512, in encode
    i_tensor = torch.stack(i_list, dim=0)
RuntimeError: [enforce fail at alloc_cpu.cpp:114] data. DefaultCPUAllocator: not enough memory: you tried to allocate 44764876800 bytes.
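The failed allocation size is consistent with stacking the whole clip as float32 RGB, assuming 1080p frames (an assumption; the resolution is not stated in the thread). Note that 1799 frames is almost exactly one minute at 30 FPS:

```python
# Rough sanity check of the 44764876800-byte allocation,
# assuming 1920x1080 frames, 3 channels, float32 (4 bytes):
frames, h, w, c, bytes_per_value = 1799, 1080, 1920, 3, 4
total = frames * h * w * c * bytes_per_value
assert total == 44764876800  # ~44.7 GB in a single contiguous tensor
```

A single ~44.7 GB allocation fails on a 64 GB machine once the per-frame list (another ~44.7 GB) and the rest of ComfyUI are already resident, which matches the `torch.stack` line in the traceback.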
tommyZZM commented 9 months ago

I think what I need is something like this

[screenshot]

with ComfyUI-VideoHelperSuite

https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite

Nuked88 commented 9 months ago

But you already have something like that... it's integrated in LoadVideo! You just need to change the batch_size value 🤣
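The idea behind a `batch_size` parameter like the one mentioned here can be illustrated with a small driver loop. This is a generic sketch, not the node's code: `load_batch` and `process` are hypothetical callables standing in for "decode this slice of frames" and "run the downstream workflow on them":

```python
def process_video(load_batch, n_frames, batch_size, process):
    """Process n_frames in slices of batch_size, so peak memory is
    bounded by one batch instead of the whole video."""
    results = []
    for start in range(0, n_frames, batch_size):
        end = min(start + batch_size, n_frames)
        batch = load_batch(start, end)  # only this slice is decoded/held
        results.append(process(batch))
        del batch  # release the slice before decoding the next one
    return results
```

With `n_frames=1800` and `batch_size=100`, only 100 frames are ever in memory at once, at the cost of 18 sequential passes through the downstream processing.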