cypriss opened this issue 8 years ago (status: Open)
@cypriss thank you for reporting this issue and I'm sorry it has taken so long to reply. Jobs has not been a main focus for me for quite some time now. I now understand why this is happening.
Jobs performance scales terribly with queue size. If you look at the Lua script for popping the next available jobs off the queue, there are two ZUNIONSTORE commands and one ZINTERSTORE command. The performance of these commands is sensitive to the size of the input sets (O(N)) and the size of the resulting output set (O(M*log(M))). When the queue is large, both the input and output sets have high cardinality, so the script to pop jobs off the queue runs incredibly slowly. I should have benchmarked Jobs with larger queue sizes in order to catch this issue sooner.
There are a few ways we could fix this, including:

1. Optimize the Lua scripts to achieve O(N) performance instead of O(N*log(N)), which won't matter much considering other libraries are able to pop jobs in a way that scales independently of queue size. The problem is that Jobs currently allows arbitrary priority levels and stores them in a sorted set where the members are job ids and the scores are the corresponding priorities.
2. Limit the number of priority levels to a fixed, configurable number P, so that popping jobs scales with the number of priority levels and the number of jobs popped rather than with total queue size (O(P + J), I think). Users who don't need different priority levels can set it to 1 and expect similar performance to competing libraries.

I believe 2 is the best option but would involve breaking backwards compatibility. I can't make any guarantees about when I will have time to do this, but I would be more than happy to review a PR if someone wants to take this on.
I benchmarked a few job processing libraries that use Redis as a backend: https://github.com/gocraft/work#benchmarks
Most can do about 10,000 to 20,000 jobs/second. Using default options, the albrow/jobs library clocked in at 40 jobs/second. I was able to increase that by increasing the number of workers and the batch size, but I wasn't sure what good values for those params would be.
Benchmark code: https://github.com/gocraft/work/blob/master/benches/bench_jobs/main.go
I'd love it if you could review my benchmark to see if I made a glaring mistake, and also share your thoughts on good params for batch size.