Does the library support batched inference, i.e. passing multiple images in a single forward pass to take advantage of GPU parallelization and speed up inference over many images? I see references to a "pipe" being created, and I was wondering whether any parallels can be drawn between this library and neural nets, which are much more efficient when running batched inference. The example appears to process images sequentially, and I couldn't find any examples of batched inference. Any leads or help would be much appreciated, thanks!
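For context, here is a minimal sketch of what I mean by batching, independent of this library's actual API (the `model_forward` function below is a hypothetical stand-in, not something from this repo): instead of calling the model once per image, images are stacked along a new leading axis and processed in one call.

```python
import numpy as np

# Hypothetical stand-in for a single-image model: a toy "model"
# that averages pixel values. A real network would replace this.
def model_forward(batch):
    # batch has shape (N, C, H, W); a batched forward pass handles
    # all N images in one call instead of N separate calls.
    return batch.mean(axis=(1, 2, 3))

images = [np.random.rand(3, 8, 8).astype(np.float32) for _ in range(4)]

# Sequential inference: one call per image.
sequential = [model_forward(img[None])[0] for img in images]

# Batched inference: stack images into shape (4, 3, 8, 8) and
# run a single forward pass over the whole batch.
batch = np.stack(images)
batched = model_forward(batch)

# Both paths give the same results; batching just amortizes
# per-call overhead and lets the GPU parallelize across images.
assert np.allclose(sequential, batched)
```

Is there an equivalent way to feed the "pipe" a stack of images like this, rather than looping over them one at a time?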
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.