RobinMalfait / lazy-collections

Collection of fast and lazy operations
MIT License

Feature suggestion: parallel processing #4

Open bjuppa opened 3 years ago

bjuppa commented 3 years ago

This package could provide utilities for a bit of "eager" processing of async iterators.

A "buffer" of a specified size, used to process items of an iterator in parallel before the resolved items are yielded, would enable the following.

New functions

These suggested functions could each take one argument describing the desired "parallelism" of the operation.

The argument could be a simple integer or, even better, a function that continuously supplies an adjusted "buffer" size to process in parallel before resolved items are yielded. That would be good for backoff when an API is rate-limited: you probably want to drop down to a buffer size of 1 and then increase again slowly.
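To make that argument concrete, here is one possible shape for it (illustrative only, not an existing lazy-collections API): either a fixed integer, or a zero-argument callback that is re-read every time the buffer is refilled, so a caller can shrink it to 1 during rate-limit backoff and grow it again later.

```typescript
// Hypothetical: a "parallelism" argument that is either a fixed size or a
// function consulted on every refill, so the buffer can shrink/grow over time.
type Parallelism = number | (() => number);

function bufferSize(p: Parallelism): number {
  return typeof p === "number" ? p : p();
}

// Example: a simple adaptive supplier that backs off to 1 and recovers slowly.
function adaptive(initial: number) {
  let size = initial;
  return {
    supplier: () => size,
    backoff: () => { size = 1; },
    recover: () => { size = Math.min(initial, size + 1); },
  };
}
```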

Suggested implementation

If race(5) is in the pipe, immediately call next() 5 times on the iterator and put the resulting promises into a processing "buffer" (probably a map). When any of them resolves, remove it from the buffer, adjust the buffer size, fill the buffer back up using next() (unless the iterator has reported being done), then yield the resolved item.
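A minimal sketch of that algorithm, assuming a plain AsyncIterable source and a fixed integer size (names and details are illustrative, not the package's actual implementation):

```typescript
// Sketch of the proposed race(n): keep up to `n` next() calls in flight and
// yield results in completion order. Illustrative only, not lazy-collections code.
async function* race<T>(n: number, iterable: AsyncIterable<T>): AsyncGenerator<T> {
  const it = iterable[Symbol.asyncIterator]();
  // The "buffer": each in-flight next() is tagged with an id so it can be
  // removed from the map when it settles.
  const buffer = new Map<number, Promise<{ id: number; result: IteratorResult<T> }>>();
  let nextId = 0;
  let done = false;

  const fill = () => {
    while (!done && buffer.size < n) {
      const id = nextId++;
      buffer.set(id, it.next().then((result) => ({ id, result })));
    }
  };

  fill();
  while (buffer.size > 0) {
    const { id, result } = await Promise.race(buffer.values());
    buffer.delete(id);
    if (result.done) {
      done = true; // stop refilling; drain whatever is still in flight
    } else {
      fill(); // top the buffer back up before yielding
      yield result.value;
    }
  }
}
```

The fixed `n` inside `fill()` is where a `() => number` supplier could be consulted instead, to get the adaptive buffer size described above.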

eager(5) would do the same thing, but hold back resolved items so they can be yielded in the same order they arrived.
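The ordered variant can be sketched the same way (again illustrative, not the package's code): tag each next() with a sequence number, park results that settle early, and only yield the next expected sequence number.

```typescript
// Sketch of the proposed eager(n): up to `n` next() calls in flight, but
// results are yielded in arrival order, not completion order.
async function* eager<T>(n: number, iterable: AsyncIterable<T>): AsyncGenerator<T> {
  const it = iterable[Symbol.asyncIterator]();
  const inFlight = new Map<number, Promise<{ id: number; result: IteratorResult<T> }>>();
  const settled = new Map<number, IteratorResult<T>>(); // parked, out-of-order results
  let nextId = 0;
  let nextToYield = 0;
  let done = false;

  const fill = () => {
    while (!done && inFlight.size < n) {
      const id = nextId++;
      inFlight.set(id, it.next().then((result) => ({ id, result })));
    }
  };

  fill();
  while (inFlight.size > 0 || settled.has(nextToYield)) {
    // Yield every consecutive result that is already available.
    while (settled.has(nextToYield)) {
      const result = settled.get(nextToYield)!;
      settled.delete(nextToYield);
      nextToYield++;
      if (result.done) done = true;
      else { fill(); yield result.value; }
    }
    if (inFlight.size === 0) break;
    const { id, result } = await Promise.race(inFlight.values());
    inFlight.delete(id);
    settled.set(id, result); // park it until its turn comes up
  }
}
```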

Background

This is an idea I've had since I did a similar thing in a project for my former employer last year. Had this package existed at that time it would have been a very elegant base for what I did!

I basically implemented map, flatten and tap for async iterators in similar ways to this package (I think those were the only basic operations I needed) then I added the race-style-parallelism on top.

The use case was reading a lot of paginated data from the Google Classroom APIs, in multiple steps. We wanted a list of a student's assignments, with course and teacher data for each assignment, something like this:
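The original snippet did not survive in the issue, but a hedged sketch of that kind of pipeline might look like the following. Every type and helper here is invented for illustration; none of it is the real Google Classroom client.

```typescript
// Hypothetical stand-ins for the Classroom calls described above.
type Assignment = { id: number; courseId: number };
type Course = { id: number; teacherId: number; name: string };
type Teacher = { id: number; name: string };

// Pretend paginated endpoint: each item would come from an API page.
async function* listAssignments(studentId: number): AsyncGenerator<Assignment> {
  yield { id: 1, courseId: 10 };
  yield { id: 2, courseId: 20 };
}

async function getCourse(id: number): Promise<Course> {
  return { id, teacherId: id + 1, name: `Course ${id}` };
}

async function getTeacher(id: number): Promise<Teacher> {
  return { id, name: `Teacher ${id}` };
}

// One step of the pipeline: decorate each assignment with its course and
// teacher. Written sequentially here; the "racing" described above would
// overlap these awaits across 10-20 items at a time.
async function* withDetails(assignments: AsyncIterable<Assignment>) {
  for await (const a of assignments) {
    const course = await getCourse(a.courseId);
    const teacher = await getTeacher(course.teacherId);
    yield { ...a, course: course.name, teacher: teacher.name };
  }
}
```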

For a typical student that could entail up to 100 API calls, and done sequentially it took a loooong time... By reading data from an async iterator in each step and "racing" 10-20 items from each of them, the next page of data was available just ahead of time, requests from all steps could be processed in parallel, and it was all in nice readable chained collection operations. Ok, they would've been even more readable with the nice pipe-style API of this package 😄

Of course this wouldn't have been an issue at all if the Google Classroom API were GraphQL 🤷, and the API limits were a big thing as we really hammered it with short bursts!

bjuppa commented 3 years ago

Hey, I just had an idea... perhaps the simplest version of this could be built by chunking and then flattening? Would that eagerly load the items in each chunk?
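One caveat with that idea: the items in a chunk only run in parallel if the work is started while the chunk is being built, and the whole chunk is awaited together (e.g. with Promise.all) before flattening; a chunker that awaits items one at a time stays sequential. A sketch of the parallel version, with illustrative names rather than the package's chunk/flatten:

```typescript
// Sketch of the chunk-then-flatten idea: start up to `n` tasks at once,
// await the whole chunk with Promise.all, then "flatten" the resolved
// values out one by one. Illustrative names, not lazy-collections' API.
async function* chunkedParallel<T>(
  n: number,
  tasks: Iterable<() => Promise<T>>,
): AsyncGenerator<T> {
  let chunk: Promise<T>[] = [];
  for (const task of tasks) {
    chunk.push(task()); // the work starts here, so items in a chunk overlap
    if (chunk.length === n) {
      yield* await Promise.all(chunk); // flatten the settled chunk
      chunk = [];
    }
  }
  if (chunk.length > 0) yield* await Promise.all(chunk); // trailing partial chunk
}
```

Unlike the race(n) idea, this preserves input order for free, but each chunk stalls on its slowest item before the next chunk starts.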

RobinMalfait commented 3 years ago

That's an interesting idea! I don't use this package that much anymore - I really should honestly. However, what do you think about this PR: https://github.com/RobinMalfait/lazy-collections/pull/5 ?

bjuppa commented 3 years ago

I'm glad you like the idea!

As it's been over a year since I worked on this myself (and I no longer have access to the old code I wrote), I'll need to dig in and wrap my head around async iterators again...

I'll sure have a look at your PR and get back to you!