Closed: bcomnes closed this 3 months ago
Just curious, the issue where this is being discussed in the ioredis repo mentions that they recommend migrating to the official client: https://github.com/redis/ioredis/issues/1870#issuecomment-2029351413.
Have we considered that, versus another community-maintained package which may end up having the same maintenance issues?
> Just curious, the issue where this is being discussed in the ioredis repo mentions that they recommend migrating to the official client: https://github.com/redis/ioredis/issues/1870#issuecomment-2029351413.
Yeah, honestly I had the same question in mind. It also applies to `fastify-rate-limit`, which currently accepts an `ioredis` instance. I would go for the official one if it is a long-term choice.
Using `ioredis` was a decision made in 2018 (#19); things might have changed now. I also notice `redis` provides a benchmark script that we might want to try: https://github.com/redis/node-redis/tree/master/benchmark
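For context on the `fastify-rate-limit` point above: that plugin accepts a client through its `redis` option, so a client migration would touch it too. A minimal sketch (the host/port and option values here are assumptions for illustration, not from this thread):

```javascript
// Sketch: @fastify/rate-limit currently taking an ioredis instance.
// Assumes fastify, @fastify/rate-limit and ioredis are installed and a
// Redis server is reachable; swapping clients would change this wiring.
const Fastify = require('fastify')
const Redis = require('ioredis')

const fastify = Fastify()

fastify.register(require('@fastify/rate-limit'), {
  max: 100,               // illustrative limit
  timeWindow: '1 minute', // illustrative window
  redis: new Redis({ host: '127.0.0.1', port: 6379 })
})
```

Since the plugin stores counters through whatever client it is handed, the choice of client propagates to every plugin with this pattern.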
Oh I'm not challenging the original choice of ioredis by any means, the official client may not have even existed back then as far as I know. If we're to revise that decision now I think the official client might be the sensible option to choose, unless there are obvious reasons to avoid it (e.g. licensing or feature-completeness)
> Oh I'm not challenging the original choice of ioredis by any means, the official client may not have even existed back then as far as I know.
I mean the decision shown in the issue about using `ioredis` instead of `redis` was based on performance and promise support. Promise support should not be a problem nowadays. Performance can be checked via https://github.com/redis/node-redis/tree/master/benchmark. It is a good time to revisit the decision made in the past. `redis` seems to provide more advanced and up-to-date features, according to the issue you provided.
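To illustrate why promise support is no longer a differentiator: both clients expose promise-returning commands today. A minimal sketch, assuming `redis@4` and `ioredis@5` and a locally reachable server (URL is an assumption):

```javascript
// Sketch: promise-based usage in both clients (redis@4 vs ioredis@5).
const { createClient } = require('redis')
const Redis = require('ioredis')

async function main () {
  // Official client: connection must be opened explicitly,
  // then every command returns a promise.
  const official = createClient({ url: 'redis://127.0.0.1:6379' })
  await official.connect()
  await official.set('key', 'value')

  // ioredis: connects lazily on first command,
  // commands return promises as well.
  const io = new Redis('redis://127.0.0.1:6379')
  await io.set('key', 'value')

  await official.quit()
  await io.quit()
}
```

So the 2018 promise-support argument for `ioredis` no longer applies either way.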
I've forked ioredis to iovalkey because:
None of the options look good right now. I would likely hold off for now, and possibly switch if the need arises.
I don't like posts that show only a last-updated date; it gives the reader a false impression that the content is current. The last update was on Jan 2, 2024, but all the other scripts (including the result image) were written or provided 3 years ago, so I question whether the conclusion in the article is still up-to-date.
Here are the results of using https://github.com/poppinlp/node_redis-vs-ioredis:
Operation | redis(ms) | redis with multi(ms) | redis with batch(ms) |
---|---|---|---|
set | 1489.4949 | 87.3262 | 83.2496 |
get | 1425.5875 | 79.2133 | 55.7291 |
hmset | 1447.7094 | 73.6554 | 71.0114 |
hgetall | 1505.95 | 91.0191 | 83.9859 |
incr | 1437.0455 | 62.8249 | 48.9377 |
keys | 1447.0564 | 88.7329 | 79.2056 |
Operation | ioredis(ms) | ioredis with multi(ms) | ioredis with pipeline(ms) |
---|---|---|---|
set | 1482.0976 | 82.4503 | 74.1322 |
get | 1465.7855 | 70.7754 | 58.4931 |
hmset | 1522.3452 | 96.9624 | 93.7096 |
hgetall | 1559.0312 | 92.6324 | 83.0306 |
incr | 1420.0616 | 57.512 | 97.9488 |
keys | 1499.0053 | 111.735 | 102.1956 |
`redis` had an average time of 536.541ms; `ioredis` had an average time of 553.884ms.

The benchmark may differ between platforms, but it does not show an extreme performance difference between `redis` and `ioredis`.
cc @kibertoad
I see you opened an issue on that repo 2 days ago; you may take the above results as a reference. I tested with `redis@4` and `ioredis@5`.
Well at least we know that performance is not an issue then
@climba03003 `redis` has auto-pipelining enabled by default, while `ioredis` doesn't. Can you check again with it enabled in `ioredis`?
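For reference, a minimal sketch of the `ioredis` option in question (host/port values are assumptions for illustration):

```javascript
// Sketch: opting in to auto-pipelining in ioredis.
// With enableAutoPipelining, commands issued in the same event-loop
// tick are flushed together as one pipeline, which is closer to the
// default behavior of the official redis@4 client.
const Redis = require('ioredis')

const client = new Redis({
  host: '127.0.0.1',
  port: 6379,
  enableAutoPipelining: true
})
```

This matters for a fair comparison, since benchmarking one client with pipelining on and the other with it off measures configuration rather than the clients themselves.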
Updated at 2024-04-02 23:00 (UTC+8)
I am a bit surprised by the result after the second run with `enableAutoPipelining`. I did not expect that `redis` would be even faster while `ioredis` becomes slower.
Running with `redis` before `ioredis` with `enableAutoPipelining: true`:

Operation | redis(ms) | redis with multi(ms) | redis with batch(ms) |
---|---|---|---|
set | 1432.3858 | 81.8358 | 69.8106 |
get | 1328.0434 | 78.1827 | 73.8312 |
hmset | 1491.0987 | 85.4957 | 67.8254 |
hgetall | 1643.6342 | 81.7741 | 76.3429 |
incr | 1528.9046 | 73.3113 | 50.5294 |
keys | 1479.9891 | 101.0535 | 78.4952 |
Operation | ioredis(ms) | ioredis with multi(ms) | ioredis with pipeline(ms) |
---|---|---|---|
set | 1799.9244 | 100.9509 | 78.3812 |
get | 1960.0755 | 79.4042 | 114.0666 |
hmset | 1858.4125 | 114.2729 | 111.7964 |
hgetall | 1944.1496 | 101.6841 | 127.9608 |
incr | 1814.5 | 70.3818 | 67.1832 |
keys | 1866.9967 | 117.0985 | 102.4569 |
`redis` had an average time of 545.697ms; `ioredis` had an average time of 690.539ms.

Running with `ioredis` with `enableAutoPipelining: true` before `redis`:
Operation | redis(ms) | redis with multi(ms) | redis with batch(ms) |
---|---|---|---|
set | 1365.6306 | 82.9341 | 66.0203 |
get | 1362.5358 | 63.0452 | 62.4757 |
hmset | 1471.8367 | 79.0067 | 70.4595 |
hgetall | 1504.3269 | 88.4178 | 86.7542 |
incr | 1381.3638 | 56.7824 | 51.427 |
keys | 1303.7898 | 100.6238 | 77.3041 |
Operation | ioredis(ms) | ioredis with multi(ms) | ioredis with pipeline(ms) |
---|---|---|---|
set | 1743.7271 | 86.7177 | 73.0347 |
get | 1960.0755 | 79.4042 | 114.0666 |
hmset | 2019.5258 | 114.2276 | 112.0642 |
hgetall | 1926.7789 | 112.3453 | 87.6865 |
incr | 1791.4514 | 67.3637 | 67.4776 |
keys | 1793.8793 | 113.9202 | 98.0459 |
`redis` had an average time of 515.263ms; `ioredis` had an average time of 687.855ms.
I don't have a strong opinion on the matter, but it does seem like, if we go the `ioredis` route, `iovalkey` might be the better long-term solution (if @mcollina agrees). Is the official `redis` client a drop-in replacement as well? I can make another PR with that when I have time.
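On the drop-in question: `redis@4` differs from `ioredis` in at least two surface ways, the explicit `connect()` step and camelCase command method names. A hedged sketch of the difference, assuming `redis@4` and `ioredis@5` (keys and values are illustrative):

```javascript
// Sketch: surface API differences between ioredis and redis@4.
const Redis = require('ioredis')
const { createClient } = require('redis')

async function compare () {
  // ioredis: connects lazily, command names are lowercase.
  const io = new Redis()
  await io.hset('myhash', 'field', 'value')
  const fromIoredis = await io.hgetall('myhash')

  // redis@4: explicit connect(), camelCase command names.
  const official = createClient()
  await official.connect()
  await official.hSet('myhash', 'field', 'value')
  const fromOfficial = await official.hGetAll('myhash')

  return [fromIoredis, fromOfficial]
}
```

So code that passes a bare `ioredis` instance around (as `fastify-redis` consumers do) would likely need changes beyond swapping the dependency.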
I don't know if we should land this in `fastify-redis` or create something like `fastify-iovalkey`.
Unless the API and protocol is different, I don't see a reason to create a completely new plugin with a different name.
I 100% expect the protocol to change in Redis 8.
Closing until it's clearer how to approach the issue.
Swap to a default redis client with a higher chance of getting maintained. iovalkey is a friendly fork maintained by @mcollina. I'm guessing this would be a breaking/major version bump if this lands.
Checklist
- Run `npm run test` and `npm run benchmark`