Closed: nateraw closed this 2 years ago (solved by #45).
Yea it's definitely faster. On Colab Pro+'s V100 instance I'm seeing about a 20% speedup on a single batch of 4. This would make things a lot faster when generating a ton of frames, as we are doing here.
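(For anyone who wants to reproduce a number like that, here's a minimal timing sketch. It assumes a recent diffusers `StableDiffusionPipeline` in fp16 on CUDA; the model id, prompt, and step count are placeholders, not the exact notebook setup.)

```python
import time

import torch
from diffusers import StableDiffusionPipeline

# Placeholder setup: fp16 weights on a CUDA device (a V100 in the Colab Pro+ case).
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompts = ["a photo of an astronaut riding a horse"] * 4  # single batch of 4

# Warm up once so CUDA/cuDNN init isn't counted in the measurement.
pipe(prompts, num_inference_steps=50)

torch.cuda.synchronize()
start = time.perf_counter()
pipe(prompts, num_inference_steps=50)
torch.cuda.synchronize()
print(f"batch of 4: {time.perf_counter() - start:.2f}s")
```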
Other things to try:
- We're doing 1 image at a time, but if you have the VRAM you'll probably get a nice speedup if you do inference batch-wise. In the case of `walk`, we might even want to do a dataloader, etc., since you'll have 100s of inputs and will have to break them up (rough sketch below). Will hack some experiments around this and report back results here. If it's worth the effort, I'll do it.
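Here's a rough sketch of what the batch-wise version could look like. This is not the repo's actual `walk` code: it assumes a recent diffusers `StableDiffusionPipeline` that accepts `prompt_embeds`/`latents` directly, and the tensors below are random stand-ins for the interpolated per-frame inputs.

```python
import torch
from diffusers import StableDiffusionPipeline
from torch.utils.data import DataLoader, TensorDataset

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Stand-ins for the walk's per-frame inputs: interpolated text embeddings and
# init latents stacked along dim 0 (random here just to show the shapes).
num_frames = 200
text_embeddings = torch.randn(num_frames, 77, 768, dtype=torch.float16)
init_latents = torch.randn(num_frames, 4, 64, 64, dtype=torch.float16)

loader = DataLoader(
    TensorDataset(text_embeddings, init_latents),
    batch_size=4,   # whatever fits in VRAM
    shuffle=False,  # keep frames in walk order
)

frames = []
for emb_batch, latent_batch in loader:
    # One denoising pass per batch instead of one image at a time.
    out = pipe(
        prompt_embeds=emb_batch.to("cuda"),
        latents=latent_batch.to("cuda"),
        num_inference_steps=50,
    )
    frames.extend(out.images)
```

A plain slicing loop over the stacked tensors would work just as well as the DataLoader here; the DataLoader mostly buys you batching/collation for free and gives you a natural place to hang things like prefetching later.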