Closed · simon-liebehenschel closed this issue 2 years ago
I do not want to create one more issue, so I will write here.
The chunks implementation is ~3x slower than another implementation. Is there any reason for this, or is it just a bad design?
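For context, chunking an async iterator can be done by hand roughly like this. This is a stdlib-only sketch for illustration, not aiostream's actual `chunks` implementation:

```python
import asyncio
from typing import AsyncIterator, List

async def gen(n: int) -> AsyncIterator[int]:
    for i in range(n):
        yield i

async def chunks(it: AsyncIterator[int], size: int) -> AsyncIterator[List[int]]:
    # Accumulate items into fixed-size lists; emit the remainder at the end
    buf: List[int] = []
    async for x in it:
        buf.append(x)
        if len(buf) == size:
            yield buf
            buf = []
    if buf:
        yield buf

async def main() -> List[List[int]]:
    return [c async for c in chunks(gen(10), 4)]

result = asyncio.run(main())
print(result)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```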
Hi @AIGeneratedUsername and thanks for the report,
> What is the reason of the current implementation?
Good catch. The stream operators provided by aiostream have quite an overhead, although I didn't expect it to be that much. However, this test is not really representative of what aiostream is typically used for, since this example does not perform any IO operation.
Try replacing the generator with:
async def gen() -> AsyncIterator[int]:
    for i in range(10_000):
        await asyncio.sleep(0)
        yield i
And you'll get something like:
values = await stream.list(gen()): 58.098731541000234
values = [_ async for _ in gen()]: 49.413229781000155
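For reference, the comprehension side of this comparison can be timed with the stdlib alone (the `stream.list` side requires aiostream to be installed); a minimal sketch:

```python
import asyncio
import time
from typing import AsyncIterator

async def gen() -> AsyncIterator[int]:
    # Yield to the event loop on every item, mimicking IO waits
    for i in range(10_000):
        await asyncio.sleep(0)
        yield i

async def consume() -> list:
    return [_ async for _ in gen()]

start = time.perf_counter()
values = asyncio.run(consume())
elapsed = time.perf_counter() - start
print(len(values), elapsed)
```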
Still, I might look into it when I have time.
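The operator overhead discussed above can be illustrated with plain asyncio (a sketch, not aiostream's actual internals): each extra async-generator delegation layer yields the same values while doing more scheduling work per item.

```python
import asyncio
from typing import AsyncIterator

async def gen(n: int) -> AsyncIterator[int]:
    for i in range(n):
        yield i

async def wrap(it: AsyncIterator[int]) -> AsyncIterator[int]:
    # One extra async-generator layer; each layer adds per-item
    # delegation cost, loosely analogous to an operator pipeline stage
    async for x in it:
        yield x

async def main():
    direct = [x async for x in gen(1000)]
    layered = [x async for x in wrap(wrap(wrap(gen(1000))))]
    return direct, layered

direct, layered = asyncio.run(main())
print(direct == layered)  # True: same values, more awaits per item
```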
> or this is just a bad design?
Please don't call other people's work bad design. Feel free to open another issue if you want me to address it.
Running
values = await stream.list(gen())
timeit: 17.5 seconds

Running
values = [_ async for _ in gen()]
timeit: 1.9 seconds

Code:
What is the reason of the current implementation?
Why?