north151 opened 4 days ago
Hi @north151 ,
Thanks for your attention! Reducing memory usage is actually something we are currently working on, but unfortunately we haven't found any effective improvements yet.
We recommend using at least one A100 80GB GPU to run StableV2V under the setting of producing 16 video frames at a size of 512×512, which is a hardware environment that is sure to work. To reduce the memory requirements, you may consider lowering the number of generated frames and the resolution, if possible.
I haven't tried multi-GPU inference yet, but I would highly appreciate it and be willing to try some potential modifications if you can offer some example code (if there is any). I will also update the code if I find any related solutions.
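For reference, one common pattern for splitting a diffusion-style video pipeline across two GPUs is simple model parallelism: place different components on different devices and move the intermediate tensors between them. This is only a sketch with placeholder modules (`encoder` and `denoiser` are illustrative stand-ins, not StableV2V's actual classes), and it falls back to CPU when fewer than two GPUs are available:

```python
import torch
import torch.nn as nn

# Pick two devices; fall back to CPU so the sketch still runs without GPUs.
dev0 = torch.device("cuda:0" if torch.cuda.device_count() >= 1 else "cpu")
dev1 = torch.device("cuda:1" if torch.cuda.device_count() >= 2 else dev0)

# Placeholder stand-ins for the pipeline's stages (not StableV2V's real modules).
encoder = nn.Conv2d(3, 4, kernel_size=3, padding=1).to(dev0)
denoiser = nn.Conv2d(4, 4, kernel_size=3, padding=1).to(dev1)

@torch.no_grad()
def run(frames: torch.Tensor) -> torch.Tensor:
    latents = encoder(frames.to(dev0))    # stage 1 runs on the first device
    latents = denoiser(latents.to(dev1))  # stage 2 runs on the second device
    return latents.cpu()

# 16 frames, small spatial size to keep the demo light.
out = run(torch.randn(16, 3, 64, 64))
print(out.shape)  # torch.Size([16, 4, 64, 64])
```

The same idea applies to the real pipeline: the `.to(device)` calls between stages are where the cross-GPU transfers happen, so each component's weights only occupy one GPU's memory.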
Best regards, Chang
Hello, I am using a 40GB GPU. Is the resolution of the video frames I input too large? Can multi-GPU inference be supported?