Currently, the model creates a queue of tasks sent to it by the pipeline's client, then handles them one at a time before sending results back. If processing is fast enough that the queue never accumulates many tasks, this is fine. However, it is very likely that at some point the queue will fill up faster than the model, running on unbatched data, can drain it. We need to collate data and allow model processing to be done in batched form.
Current Idea:
Before taking the newest task from the queue, check the queue's size and decide whether batching is worthwhile (e.g. if the number of items in the queue exceeds some predefined threshold)
Take several tasks off the queue and collate them
After the model output is obtained, undo the collation and send the results back as normal
The collate and uncollate functions should be definable by the user but optional (with sensible defaults)
The queued-task handler should deal with collation and making batched calls to process
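The steps above could be sketched roughly like this. All names here are hypothetical (`handle_queued_tasks`, `BATCH_THRESHOLD`, `MAX_BATCH_SIZE`, the `process` callable), and the defaults assume the model's `process` function accepts a list of tasks and returns one result per task:

```python
import queue

BATCH_THRESHOLD = 4   # hypothetical: only batch once this many tasks are waiting
MAX_BATCH_SIZE = 16   # hypothetical: cap on how many tasks go into one batch

def default_collate(tasks):
    # Default collation: hand the model the list of tasks unchanged.
    return tasks

def default_uncollate(batch_output):
    # Default un-collation: assume the model returned one result per task.
    return list(batch_output)

def handle_queued_tasks(task_queue, process,
                        collate=default_collate, uncollate=default_uncollate):
    """Process one task normally, or a whole batch once the queue is deep enough."""
    if task_queue.qsize() < BATCH_THRESHOLD:
        # Queue is shallow: process a single task, no real collation needed.
        return [process([task_queue.get()])[0]]
    # Queue is deep: drain up to MAX_BATCH_SIZE tasks and run them as one batch.
    tasks = []
    while len(tasks) < MAX_BATCH_SIZE:
        try:
            tasks.append(task_queue.get_nowait())
        except queue.Empty:
            break
    batch = collate(tasks)
    return uncollate(process(batch))
```

Note that `qsize()` is only approximate under concurrent producers, so the threshold check is a heuristic, not a guarantee of batch size.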