Why do you worry about the size? You're not going to build and upload it manually, are you?
@tuananh
Not sure if I understand your question, but here's how it usually works in "serverless" (e.g. AWS Lambda):
Code is run in an isolated container. If the container has been dropped (i.e. it hasn't been triggered for several minutes, e.g. by an API call from an actual user), the platform has to reinitialize a container, download, unzip, parse & run the code again. That's called a cold start & it can take seconds if the zip file is too big.
Less code == faster start == better user experience == cheaper, because it's usually priced per 100ms.
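(Roughly like this - a minimal Lambda-style sketch; the handler shape, env var & key names are just illustrative, not from any real setup:)

```js
// Minimal Lambda-style handler sketch (handler shape & env var are illustrative).
const Redis = require('ioredis');

// Everything at module scope runs on every cold start: the bigger the zip,
// the more there is to download, unzip & parse before this line is reached.
const redis = new Redis(process.env.REDIS_URL);

exports.handler = async () => {
  // On warm invocations only this part runs; the container & connection are reused.
  const hits = await redis.incr('hits');
  return { statusCode: 200, body: JSON.stringify({ hits }) };
};
```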
And yes, I'm currently building & uploading the code manually to the serverless provider, but only because I'm testing & trying things out. I would probably have an automated dev pipeline in production.
A 300KB difference isn't much compared with your code (500KB) + the container base size (a few MB at least, assuming it's Alpine-based).
> If container has been dropped

In this case, the response is slow anyway. And I don't think Lambda (for example) will download & unzip the code every single time the function executes.
@tuananh
No, it only downloads & unzips on cold start, & you don't get too many cold starts if traffic is medium to high.
You're probably right about no huge time differences, but for example https://github.com/luin/ioredis/pull/494 - such a small change & it already cuts the bundled ioredis size by ~25-50% (~25% smaller if only uglified, ~50% smaller if also zipped).
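(If you want to check numbers like that yourself, something like this works - the bundle path is just a placeholder:)

```js
// Quick size check for a minified bundle, before & after gzip (path is a placeholder).
const fs = require('fs');
const zlib = require('zlib');

const bundle = fs.readFileSync('dist/bundle.min.js');
console.log('uglified:', (bundle.length / 1024).toFixed(1), 'KB');
console.log('gzipped: ', (zlib.gzipSync(bundle).length / 1024).toFixed(1), 'KB');
```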
Fair enough.
Btw, looks like @luin hears you :) https://github.com/luin/ioredis/pull/494
Nice! The package codebase itself is relatively small & the author(s)/maintainer(s) probably know best if & which dependencies are crucial. Plus there's definitely some extra code to support older versions of Node; not going to protest against that.
Anyway, I feel a lot better using this package now that #494 is merged, so I'm going to close this issue. It doesn't look like it's something people are much worried about anyway.
Now that serverless computing is gaining popularity & size matters more than ever, should we reevaluate this client from a size & modularity perspective (e.g. like lodash)? I was trying to browse the dependencies & files to figure it out, but this codebase is too alien for me to evaluate accurately. It doesn't seem to concern many people; I didn't find any discussion about this except https://github.com/luin/ioredis/issues/286 & https://github.com/luin/ioredis/pull/494.
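(By the lodash-style modularity I mean roughly this - a generic sketch, not code from this repo:)

```js
// Whole-library import: the bundler has to pull in all of lodash.
const _ = require('lodash');
console.log(_.pick({ id: 1, name: 'a', secret: 'x' }, ['id', 'name']));

// Per-method import: only the used function (and its helpers) ends up in the bundle.
const pick = require('lodash/pick');
console.log(pick({ id: 1, name: 'a', secret: 'x' }, ['id', 'name']));
```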
My tests with Rollup.js & an example backend (bundled into a single flat file including all dependencies):
Normal
- redis client: ~798Kb
- ioredis client: ~1591Kb

Uglified
- redis client: ~347Kb
- ioredis client: ~544Kb

Uglified + zipped
- redis client: ~100Kb
- ioredis client: ~160Kb

I can easily live with 160Kb, but it's ~2/3 the size of the whole backend code + all other dependencies. This example backend isn't some simple REST proxy backend; it handles GraphQL queries, JWT tokens, password hashing, business logic & everything else a typical backend does. 2/3 seems a bit harsh, don't you think?
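(For reference, a Rollup setup for this kind of test looks roughly like the following - the plugin names, entry point & output path here are assumptions, not my exact config:)

```js
// rollup.config.js - bundle the backend & all deps into one flat file, then minify.
// Entry point, output path & plugin choices are illustrative.
import { nodeResolve } from '@rollup/plugin-node-resolve';
import commonjs from '@rollup/plugin-commonjs';
import json from '@rollup/plugin-json';
import terser from '@rollup/plugin-terser';

export default {
  input: 'src/index.js',
  output: { file: 'dist/bundle.min.js', format: 'cjs' },
  plugins: [
    nodeResolve({ preferBuiltins: true }), // resolve deps from node_modules
    commonjs(),                            // convert CommonJS packages (like ioredis) for bundling
    json(),                                // some deps require() .json files
    terser(),                              // minify ("uglify") the final bundle
  ],
};
```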
I can totally understand if this package targets "traditional" backends & size is not a concern; just let us know. If so, it's a bit of a shame, because there are only 2 good (popular might be a better word) Node.js Redis clients & of these 2 only this one supports clusters.
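(The cluster support I mean is ioredis' Redis.Cluster - a minimal sketch, hosts & ports are placeholders:)

```js
// Minimal ioredis cluster usage (hosts & ports are placeholders).
const Redis = require('ioredis');

const cluster = new Redis.Cluster([
  { host: '127.0.0.1', port: 7000 },
  { host: '127.0.0.1', port: 7001 },
]);

cluster
  .set('foo', 'bar')
  .then(() => cluster.get('foo'))
  .then(console.log)
  .finally(() => cluster.disconnect());
```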