caoandong opened this issue 2 months ago
Hi, thank you so much for open sourcing this amazing work!
I'm wondering what the memory requirement is to run the inference script. I tested the script verbatim on an A100 40G machine and it went OOM. Curious if we need to use an 80G machine instead, or is there something obvious that I'm missing?
Thanks!
At this time, an A100 80G machine is required for high-resolution (~2K), high-frame-rate (fps >= 24) video generation. You can decrease "up_scale" or "target_fps" to avoid OOM, but the visual results will show an obvious drop in quality.
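For planning purposes, the two data points reported later in this thread (~40 GB for 2 seconds of 720p, ~71 GB for 3 seconds, with upscale = 1) can be turned into a rough linear rule of thumb. This is only an extrapolation sketch from those two measurements, not an official memory model:

```python
def estimate_vram_gb(seconds: float) -> float:
    """Rough VRAM estimate for 720p generation without upscaling.

    Fits a line through the two data points reported in this thread:
    2 s -> ~40 GB, 3 s -> ~71 GB. Purely an extrapolation, not a
    guarantee; actual usage depends on resolution, fps, and up_scale.
    """
    slope = (71 - 40) / (3 - 2)  # ~31 GB per additional second
    return 40 + slope * (seconds - 2)
```

By this crude estimate, even a 2-second 720p clip would not fit on a 40 GB card, which matches the OOM reported above.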
In my early experiments it requires more than 40 GB for 2 seconds of 720p video; a 3-second video needs ~71 GB of VRAM without upscaling (upscale = 1).
Another question: can we use something like FlashAttention to reduce VRAM usage?
You are correct, this algorithm is expensive. Actually, we have already incorporated xformers for attention computation. You can reduce the number of sampling steps to achieve faster inference, but the performance will undoubtedly drop. We will design more efficient sampling strategies in the future.
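For readers unfamiliar with why FlashAttention and xformers' memory-efficient attention save VRAM: both avoid materializing the full N×N attention score matrix at once. Below is a toy NumPy sketch of the chunking idea only (the real kernels are fused CUDA implementations with an online softmax, so this is an illustration, not their actual algorithm):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def naive_attention(q, k, v):
    # Materializes the full (N, N) score matrix: O(N^2) memory.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def chunked_attention(q, k, v, chunk=64):
    # Processes queries in chunks, so only a (chunk, N) slice of the
    # score matrix is alive at any time: O(chunk * N) memory instead.
    out = np.empty_like(q)
    for i in range(0, q.shape[0], chunk):
        s = q[i:i + chunk] @ k.T / np.sqrt(q.shape[-1])
        out[i:i + chunk] = softmax(s) @ v
    return out
```

Both functions produce identical outputs; only the peak memory of the intermediate score matrix differs, which is why memory-efficient attention helps with OOM but does not change the result.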
How long did inference take for the 2-second 720p video? @JC1DA