Gitterman69 closed this issue 3 weeks ago
Thank you for your interest in our research!
In terms of hardware, I don't know of a way to accelerate inference on a single GPU. You might try loading multiple pipelines on one GPU using our torch multiprocessing code, but I don't think it will dramatically reduce the time.
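A rough sketch of what "multiple pipelines on one GPU" could look like. The worker body is hypothetical (the real code would do something like `pipe = load_pipeline(device="cuda:0")` and then sample); only the process layout is shown, using the stdlib `multiprocessing` module, for which `torch.multiprocessing` is a drop-in replacement:

```python
import multiprocessing as mp  # torch.multiprocessing exposes the same API


def split_prompts(prompts, num_workers):
    # Round-robin split of the prompt list across workers.
    return [prompts[i::num_workers] for i in range(num_workers)]


def worker(worker_id, prompts, queue):
    # Hypothetical stand-in for the real sampling call, e.g.:
    #   pipe = load_pipeline(device="cuda:0")  # all workers share the one GPU
    #   videos = [pipe(p) for p in prompts]
    videos = [f"video for '{p}' (worker {worker_id})" for p in prompts]
    queue.put((worker_id, videos))


def run_parallel(prompts, num_workers=2):
    queue = mp.Queue()
    chunks = split_prompts(prompts, num_workers)
    procs = [mp.Process(target=worker, args=(i, chunks[i], queue))
             for i in range(num_workers)]
    for p in procs:
        p.start()
    results = dict(queue.get() for _ in procs)
    for p in procs:
        p.join()
    return results


if __name__ == "__main__":
    print(run_parallel(["a cat surfing", "a dog skiing", "a bird diving"]))
```

Note that the workers contend for the same GPU, so as said above this mostly helps throughput across prompts rather than latency of a single video.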
BTW, reducing `video_length` (i.e., f in the paper) in the code from 16 to 12, 8, or so will definitely save sampling time. However, it may decrease quality, and we have not tested these cases yet 😅
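As a crude back-of-the-envelope, if sampling cost scales roughly linearly with `video_length` (an assumption, not a measurement), you can estimate the savings from the numbers you already have:

```python
def estimated_minutes(baseline_minutes, baseline_frames, new_frames):
    """Scale an observed sampling time by the frame ratio,
    assuming (untested!) roughly linear cost in video_length."""
    return baseline_minutes * new_frames / baseline_frames


# Starting from the reported 16 minutes at video_length=16:
for f in (16, 12, 8):
    print(f"video_length={f}: ~{estimated_minutes(16.0, 16, f):.0f} min")
```

Real savings may be smaller, since some per-step overhead does not shrink with the frame count.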
Is there any way to increase speed (16 minutes for 100 frames on a 4090 at 100% GPU utilization) without multi-GPU usage? It's amazing we finally have a "long video" creation tool with good quality... any hints/tips/tricks would be highly appreciated.
Also, some ways to trade quality for speed would be helpful.
Thanks so much!