froOzzy opened this issue 2 years ago
Do you pickle/unpickle the lock instances?
Or use anything that might do that (multiprocessing, ipc frameworks etc)?
I use this library in conjunction with Django == 3.2 to prevent simultaneous execution of the operation. Example:
```python
from functools import wraps

import redis_lock


def lock_user_request(func):  # decorator for blocking double removal
    @wraps(func)
    def wrapped(*args, **kwargs):
        ...
        lock_key = 'key'
        redis_conn = redis_connection(strict=True)
        lock = redis_lock.Lock(redis_conn, lock_key, expire=60)
        if lock.acquire(blocking=False):
            result = func(*args, **kwargs)
            try:
                lock.release()  # the error occurs here
                logger.info('release %s', lock_key)
            except Exception:
                ...
            return result
        else:
            raise redis_lock.NotAcquired('Can not acquire key')
    return wrapped


@lock_user_request
@transaction.atomic
def drop(data):  # some function for deleting data
    ...
    return True


def func(request):  # view
    ...
    success_drop = drop(data)
    ...
```
Moreover, this error does not appear consistently; it only occurs intermittently.
Do you use this in a thread-based webserver? I wonder if there's some race condition going on somewhere.
I wonder if 0db1d1d makes your problem go away.
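The race being suspected here can be illustrated with a generic lazy class-level initialization pattern (a sketch with illustrative names, not redis_lock's actual code): two threads both observe the class attribute as unset and both run the initialization, because the check-then-set is not atomic.

```python
import threading
import time


class LazyClient:
    # Class-level handle that is initialized lazily on first use,
    # mirroring a lazily registered script handle.
    _script = None
    init_count = 0

    @classmethod
    def register(cls):
        if cls._script is None:      # check ...
            time.sleep(0.05)         # widen the window so the race is visible
            cls.init_count += 1      # both threads can get here,
            cls._script = object()   # ... then both assign


barrier = threading.Barrier(2)


def worker():
    barrier.wait()                   # line both threads up on the check
    LazyClient.register()


threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# init_count ends up at 2: the initialization ran twice. Guarding the
# check-then-set with a threading.Lock, or initializing eagerly, avoids this.
```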
Thank you for your help; I will try to check your changes as soon as possible.
Good afternoon. I'm sorry it took me so long to check your changes (a lot of work at the end of the year), but unfortunately they did not help.
Interestingly, in the tests I was able to work around this problem using a fixture.
```python
@fixture
def mock_reset_all_script(mocker: MockerFixture) -> None:
    mocker.patch('redis_lock.reset_all_script', None)
```
I run tests with pytest and pytest-xdist.
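What that patch does can be sketched with a simplified stand-in for the library's sentinel pattern (assumed from the linked `register_scripts` source; `fake_register` and the handle tuples are illustrative, no real Redis involved): the module-level `reset_all_script` doubles as an "already registered" flag, so resetting it to `None` forces the next lock to re-register instead of trusting a possibly stale class-level handle.

```python
# Module-level sentinel: None means "scripts not registered yet".
reset_all_script = None

_counter = {"n": 0}


def fake_register(name):
    """Illustrative stand-in for redis_client.register_script."""
    _counter["n"] += 1
    return (name, _counter["n"])  # a distinct handle per registration


class Lock:
    unlock_script = None

    @classmethod
    def register_scripts(cls):
        global reset_all_script
        if reset_all_script is None:  # only registers while sentinel is unset
            cls.unlock_script = fake_register("unlock")
            reset_all_script = fake_register("reset_all")


Lock.register_scripts()
stale = Lock.unlock_script

Lock.register_scripts()           # sentinel is set: a no-op, handle unchanged
unchanged = Lock.unlock_script

reset_all_script = None           # what the fixture's mocker.patch does
Lock.register_scripts()           # now re-registers, producing a fresh handle
fresh = Lock.unlock_script
```

Under pytest-xdist each worker can then re-register against its own connection rather than reusing a handle created elsewhere.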
@froOzzy So this is a problem that appears in your test suite? Can you extract a reproducer?
It appears in production and, more recently (after adding the pytest-testmon library), in CI/CD as well.
Well I need more info. Are you pickling the lock instances or anything strange?
This is not a burning problem for me right now, so in my free time I will put together a minimal test project that reproduces the issue, so that you have everything you need to fix it (I have also found a way to work around the problem in production).
I am experiencing the same issue after upgrading from 3.5.0 to 4.0.0. I am currently unable to reproduce it, as it happens only when running my tests on Travis. I will try to get more info, but was wondering if there is any update on this?
I worked around this issue by subclassing redis_lock.Lock and overriding Lock.register_scripts() as an instance method rather than a class method, setting the script variables on each instance. This seems to have fixed the problem for us (so far), and it doesn't appear to add any significant overhead.
```python
class PatchedLock(redis_lock.Lock):  # subclass name is illustrative
    def register_scripts(self, redis_client):
        redis_lock.reset_all_script = redis_client.register_script(redis_lock.RESET_ALL_SCRIPT)
        self.unlock_script = redis_client.register_script(redis_lock.UNLOCK_SCRIPT)
        self.extend_script = redis_client.register_script(redis_lock.EXTEND_SCRIPT)
        self.reset_script = redis_client.register_script(redis_lock.RESET_SCRIPT)
        self.reset_all_script = redis_client.register_script(redis_lock.RESET_ALL_SCRIPT)
```
Another idea would be to set these as variables in the global scope, since there is nothing about them that differs between instances anyway.
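That alternative can be sketched generically (a hypothetical `register` function stands in for `redis_client.register_script`, since real registration needs a live server): register every script once, eagerly, at import time, so no instance ever observes a half-initialized class attribute and there is no lazy check-then-set left to race on.

```python
def register(source: str):
    """Illustrative stand-in for redis_client.register_script."""
    return lambda: "ran " + source


# Registered once at import time, in the global scope; every lock
# instance shares the same already-initialized handles.
UNLOCK_HANDLE = register("unlock.lua")
EXTEND_HANDLE = register("extend.lua")
RESET_HANDLE = register("reset.lua")
RESET_ALL_HANDLE = register("reset_all.lua")


class GlobalScriptLock:
    """Uses the module-level handles instead of lazy class attributes."""

    def release(self):
        return UNLOCK_HANDLE()
```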
I'm using python-redis-lock==4.0.0, and this error occurred once. I use it in FastAPI to prevent duplicate calls to an endpoint. When using redis_lock, the following errors began to appear:
I am using the latest version of python-redis-lock, 3.7.0. I added logging for this error and found that the functions registered at the very beginning end up unregistered: https://github.com/ionelmc/python-redis-lock/blob/8e1872e56375d707c7ffa739511b555a5639f821/src/redis_lock/__init__.py#L170
A line from the log:
Please help me to sort out the problem 🙏