GoogleChromeLabs / comlink

Comlink makes WebWorkers enjoyable.

Significant performance optimizations possible in `requestResponseMessage` #647

Open josephrocca opened 7 months ago

josephrocca commented 7 months ago

I'm using Comlink on a Deno server that's handling a lot of traffic. I'm just passing a string to the worker, and the worker sends back ArrayBuffers (using `Comlink.transfer` for those, of course). It turns out that Comlink has become a bottleneck on the main thread, specifically this code:

[screenshot of the `requestResponseMessage` code]

I'd have thought that V8 would have some magic optimizations to make the function creation here less expensive than it appears to be. It looks like it's creating a "fresh" function each time, and then isn't able to optimize it since it only gets called once, so simple stuff like `!ev.data || !ev.data.id` runs slowly because it's basically running in "interpreted" mode. That's my guess here, anyway.
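
For readers without the screenshot, the pattern being described looks roughly like the sketch below. This is a simplified approximation using a plain `Worker` as the endpoint, not the actual Comlink source: every call allocates a fresh listener closure, and every pending call's listener runs against every incoming message.

```ts
// Simplified approximation (not the real Comlink code) of the per-request
// listener pattern: each call creates its own closure, filters replies by id,
// and removes itself once its own reply arrives.
function requestResponseMessage(
  ep: Worker,
  msg: object,
  transfers: Transferable[] = []
): Promise<any> {
  return new Promise((resolve) => {
    const id = crypto.randomUUID(); // stand-in for Comlink's internal id generator
    const listener = (ev: MessageEvent) => {
      // This check runs in every pending call's listener for every message.
      if (!ev.data || !ev.data.id || ev.data.id !== id) {
        return;
      }
      ep.removeEventListener("message", listener);
      resolve(ev.data);
    };
    ep.addEventListener("message", listener);
    ep.postMessage({ id, ...msg }, transfers);
  });
}
```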

I'm wondering if you'd welcome a pull request that optimizes this, and if so, whether you have a preferred approach. My thinking is that you'd have a single handler function (rather than creating a new one for every request) and a Map that maps each id to its resolver.
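
A minimal sketch of that idea, with illustrative names (`pendingById`, `handleResponse`) and `crypto.randomUUID()` standing in for Comlink's internal id generation:

```ts
// One shared, top-level handler plus a Map from message id to Promise
// resolver, so no per-request closure is ever allocated.
type Resolver = (data: any) => void;

const pendingById = new Map<string, Resolver>();

function handleResponse(ev: MessageEvent): void {
  if (!ev.data || !ev.data.id) return;
  const resolve = pendingById.get(ev.data.id);
  if (!resolve) return; // not a reply to one of our pending requests
  pendingById.delete(ev.data.id);
  resolve(ev.data);
}

function requestResponseMessage(
  ep: Worker,
  msg: object,
  transfers: Transferable[] = []
): Promise<any> {
  return new Promise((resolve) => {
    const id = crypto.randomUUID();
    pendingById.set(id, resolve);
    // EventTarget ignores duplicate registrations of the same listener, so
    // the shared handler is effectively installed once per endpoint.
    ep.addEventListener("message", handleResponse);
    ep.postMessage({ id, ...msg }, transfers);
  });
}
```

Since the shared handler runs for every message, V8 should be able to optimize it, and each incoming message triggers one Map lookup instead of invoking every pending call's listener.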

Vitaminaq commented 7 months ago

+1. When I perform a lot of operations within a short period of time, the performance degradation is very severe.

The same applies to the `createProxy` method.

qdhuxp commented 7 months ago

+1. I even frequently get OOMs from the `resolve(ev.data);` line in this function. I know my data is large, but I don't know how to avoid this in my code.

mhofman commented 7 months ago

I'm not affiliated with this project, but most likely I would have a single listener per ep and a Map of pending calls, associating each id with its resolver. That would ensure only a single event listener is executed per received message.
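
For concreteness, a compact sketch of that per-endpoint variant (illustrative names, not the actual Comlink internals): each endpoint lazily gets exactly one listener plus its own Map of pending calls.

```ts
// Each endpoint gets a single lazily-installed listener and its own Map of
// pending calls; the request path never touches addEventListener again.
type Resolver = (data: any) => void;

const pendingCalls = new WeakMap<Worker, Map<string, Resolver>>();

function pendingFor(ep: Worker): Map<string, Resolver> {
  const existing = pendingCalls.get(ep);
  if (existing) return existing;
  const map = new Map<string, Resolver>();
  pendingCalls.set(ep, map);
  // The only listener this endpoint gets: dispatch each reply to its resolver.
  ep.addEventListener("message", (ev: MessageEvent) => {
    if (!ev.data || !ev.data.id) return;
    const resolve = map.get(ev.data.id);
    if (!resolve) return;
    map.delete(ev.data.id);
    resolve(ev.data);
  });
  return map;
}

function requestResponseMessage(
  ep: Worker,
  msg: object,
  transfers: Transferable[] = []
): Promise<any> {
  return new Promise((resolve) => {
    const id = crypto.randomUUID();
    pendingFor(ep).set(id, resolve);
    ep.postMessage({ id, ...msg }, transfers);
  });
}
```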

benjamind commented 6 months ago

Hey folks, sorry for the slow replies here. @surma and I are not super active on this project (we're looking for maintainers if you're interested!).

Definitely open to a PR here to explore optimizations. Unless there's some specific reason we're not already taking this approach, @surma?