Open 0xDEC0DE opened 9 months ago
This is not a bug; it is 100% intentional behavior, with comments in the set_concurrency_keys.lua script explaining that it is done on purpose.
This is the only place that sets up the key, so naturally it's going to override the current value if one was set from a different client.
I'd advocate for a second key that allows users to set their own limits, effectively as an override for the code's own limit. We can amend the Lua script so it looks for that key first and falls back to the real one. Users could also put a TTL on the second key to get the "temporary override" behaviour.
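The lookup logic could be sketched roughly as follows. This is a plain-Python illustration of the "override first, fall back to baseline" idea, not the actual Lua script; the field names `user_max_concurrency` and `max_concurrency` are assumptions for the sake of the example, and in the real implementation these would be `HGET` calls inside the script:

```python
def effective_max_concurrency(job_settings, default):
    """Return the concurrency limit for a job, preferring a
    user-set override over the code-defined baseline.

    job_settings simulates the fields of the Redis hash; in the
    real Lua script these lookups would be HGET calls.
    """
    # Hypothetical override field, set by operators (possibly with a TTL).
    override = job_settings.get("user_max_concurrency")
    if override is not None:
        return int(override)
    # Fall back to the baseline value written by set_concurrency_keys.lua.
    baseline = job_settings.get("max_concurrency")
    return int(baseline) if baseline is not None else default
```

Because the baseline key is only ever written by the code path, operator overrides in the second key would survive worker restarts.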
In fact I'd say let's add a tool script to do this, so that the user doesn't need to know about our Redis keys. Something like:
```
spinach max_concurrency myjobname 32 1m
```

to set the `myjobname` job to a concurrency of 32 for 1 minute.
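A minimal sketch of what the command-line side of such a tool might look like, using `argparse`. The `max_concurrency` subcommand, its argument names, and the `30s`/`1m`/`2h` duration syntax are all assumptions here, not an existing interface:

```python
import argparse
import re


def parse_duration(text):
    """Parse durations like '30s', '1m', '2h' into seconds.
    Returns None when no duration was given (a permanent override)."""
    if text is None:
        return None
    match = re.fullmatch(r"(\d+)([smh])", text)
    if not match:
        raise ValueError(f"bad duration: {text!r}")
    value, unit = int(match.group(1)), match.group(2)
    return value * {"s": 1, "m": 60, "h": 3600}[unit]


def build_parser():
    """Parser for a hypothetical `spinach max_concurrency` subcommand."""
    parser = argparse.ArgumentParser(prog="spinach")
    sub = parser.add_subparsers(dest="command")
    mc = sub.add_parser("max_concurrency",
                        help="temporarily override a job's concurrency limit")
    mc.add_argument("job", help="job name, e.g. myjobname")
    mc.add_argument("limit", type=int, help="new concurrency limit")
    mc.add_argument("duration", nargs="?",
                    help="how long the override lasts, e.g. 1m; omit for permanent")
    return parser

# After parsing, a real tool would write the override into Redis
# (e.g. via redis-py), setting a TTL when a duration was given, so
# operators never need to know the key names.
```

One design note: per-field TTLs on a Redis hash require Redis 7.4's `HEXPIRE`, so on older servers a temporary override would need to live in a standalone key per job rather than a field in a shared hash, which fits the "second key" proposal above.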
This one is a bit complicated, but:
### Preamble
A test case script:
### Steps to reproduce
```
docker-compose -f spinach/tests/docker-compose.yml up -d
redis-cli
```

and execute the command `HSET spinach/_max_concurrency nap 32`
### Expected result
The second run of the script runs all 32 tasks simultaneously, like we told it to
### Actual behavior
The second run of the script still processes the queue 8 at a time
### Miscellany

This is not a bug; it is intentional behavior, with comments in the `set_concurrency_keys.lua` script explaining that it is done on purpose. Our jobs have `max_concurrency` set to "baseline" values, and Operations occasionally has a need to tune the values up or down, for reasons. We run `flask_spinach` under Gunicorn; when Gunicorn cycles out workers, the new workers start up and destroy any runtime adjustments to Redis that operators may have made, and do so with no warning. Operators can adjust the `_max_concurrency` key directly, but Issue #15 makes that not work very well. Any thoughts and insights on how to best address this would be appreciated.