Picsart-AI-Research / StreamingT2V

StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text
https://streamingt2v.github.io/
1.35k stars · 141 forks

How to run inference on multiple GPUs #57

Open GallonDeng opened 3 weeks ago

GallonDeng commented 3 weeks ago

How can I run inference on multiple GPUs, such as RTX 4090s, since the model needs much more than 24 GB of memory?

rob-hen commented 3 weeks ago

Hi @AllenDun, thank you for your interest in our project.

There is currently no multi-GPU implementation. We are working on reducing the memory requirements.
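Since StreamingT2V itself has no multi-GPU support yet, the only workaround is manual model parallelism: placing different parts of a model on different devices and moving activations between them. Below is a minimal, generic PyTorch sketch of that idea; it is not part of the StreamingT2V codebase, the `SplitModel` class and layer sizes are hypothetical, and it falls back to CPU when two GPUs are not available.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of naive layer-wise model parallelism.
# Not StreamingT2V code; device placement falls back to CPU
# when fewer than two GPUs are present.
dev0 = torch.device("cuda:0" if torch.cuda.device_count() >= 2 else "cpu")
dev1 = torch.device("cuda:1" if torch.cuda.device_count() >= 2 else "cpu")

class SplitModel(nn.Module):
    def __init__(self):
        super().__init__()
        # First half of the network lives on dev0, second half on dev1.
        self.part1 = nn.Sequential(nn.Linear(64, 128), nn.ReLU()).to(dev0)
        self.part2 = nn.Linear(128, 64).to(dev1)

    def forward(self, x):
        # Activations are explicitly moved between devices.
        x = self.part1(x.to(dev0))
        return self.part2(x.to(dev1))

model = SplitModel().eval()
with torch.no_grad():
    out = model(torch.randn(1, 64))
print(out.shape)
```

This only splits parameters, not computation: each GPU is idle while the other runs, so it reduces per-device memory but does not speed up inference. Libraries such as HuggingFace Accelerate automate this kind of placement (`device_map="auto"`), though applying that to a custom video pipeline like StreamingT2V would still require adapting its code.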