Closed yjwong closed 4 years ago
That would be a great pull request to add, with a connection option to configure that COUNT!
Let me know if you need help creating that pull request.
Hello @yjwong! I'm checking in to see if you need help working on this pull request.
Can you please evaluate https://github.com/actionhero/node-resque/pull/330 and let me know if that works for you?
We have a Redis instance with approximately 1.3 million keys. With this number of keys, anything that relies on `connection.getKeys()` (https://github.com/actionhero/node-resque/blob/master/src/core/connection.ts#L93), including `queue.allDelayed()`, `queue.locks()`, `queue.stats()`, and even scheduler polling, can get very slow, taking over 30 seconds. This is because `redis.scan()` gets invoked hundreds of thousands of times when `connection.getKeys()` is called.

`connection.getKeys()` uses the `SCAN` command in Redis. When using `SCAN`, the `COUNT` parameter defaults to 10. Since the function is recursive, the stack depth can get very deep even with 10k keys. We have found success in increasing `COUNT` to a higher value, such as `1000`, which gives a 10-15x speedup on such a Redis instance. Would it make sense to increase this value in `node-resque` when calling `scan()`?

Timings on our instance for various `COUNT` values:

- Default value (10): ~40s
- 500: ~5s
- 1000: ~3.75s
- 5000: ~3.2s
- 10000: ~2.9s
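For illustration, here is a minimal sketch of the idea: an iterative `SCAN` loop with a configurable `COUNT`, rather than the default of 10. This is not the actual `node-resque` implementation; the `scan` signature assumed here follows the ioredis style (`scan(cursor, 'MATCH', pattern, 'COUNT', n)` returning `[nextCursor, keys]`), and the `getKeys` helper and its defaults are hypothetical.

```typescript
// ioredis-style scan result: [nextCursor, matchedKeys]
type ScanResult = [string, string[]];

// Minimal client shape this sketch depends on (hypothetical).
interface ScanClient {
  scan(cursor: string, ...args: (string | number)[]): Promise<ScanResult>;
}

// Collect all keys matching `match`. A higher COUNT means each SCAN call
// examines more slots, so far fewer round trips are needed on large keyspaces.
async function getKeys(
  redis: ScanClient,
  match: string,
  count = 1000 // hypothetical default; node-resque could expose this as a connection option
): Promise<string[]> {
  const keys: string[] = [];
  let cursor = "0";
  do {
    const [next, batch] = await redis.scan(
      cursor,
      "MATCH",
      match,
      "COUNT",
      count
    );
    keys.push(...batch);
    cursor = next;
  } while (cursor !== "0"); // Redis signals a complete iteration with cursor "0"
  return keys;
}
```

Using an iterative loop instead of recursion also keeps the stack flat regardless of how many `SCAN` pages the server returns.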