I wanted to implement different wait behaviours depending on which exception was raised when calling a given function.
I wrapped the function with @retry twice, but I noticed that the inner retry gets its attempt count/state reset every time the outer retry catches an exception.
This means that failed attempts already registered by the inner decorator are discarded, and it effectively starts counting from scratch again and again.
You can actually see in the logs below that the RetryCallState object keeps changing for the inner decorator, while it stays the same for the outer one, which can therefore keep counting its attempts.
I assume that's simply how it works in Python and that the two decorators do not share retry state (the toy sketch at the end illustrates what I mean).
Maybe this is the desired outcome, or maybe I am abusing what tenacity is meant for. I just wanted to check with the rest of the community whether this is considered a bug or a feature - let's agree on it :).
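Roughly, the setup looks like the sketch below; the exception classes, wait strategies, attempt limits and logging hook here are illustrative stand-ins rather than my exact code.

import logging

from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
    wait_fixed,
)

logger = logging.getLogger(__name__)


class TooManyRequests(Exception):
    """Illustrative stand-in for the '429 too many requests' failure."""


class OtherError(Exception):
    """Illustrative stand-in for 'some other exception'."""


def log_retry_state(retry_state):
    # Illustrative before_sleep hook: log the RetryCallState repr,
    # which is what shows up in the logs below.
    logger.warning(retry_state)


# Outer retry: back off (exponentially here) when we are rate limited.
@retry(
    retry=retry_if_exception_type(TooManyRequests),
    wait=wait_exponential(multiplier=5),
    stop=stop_after_attempt(5),
    before_sleep=log_retry_state,
)
# Inner retry: short fixed wait for everything else.
@retry(
    retry=retry_if_exception_type(OtherError),
    wait=wait_fixed(1),
    stop=stop_after_attempt(5),
    before_sleep=log_retry_state,
)
def get_item():
    ...  # the call that can raise either kind of exception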
-------------------------------- live log call ---------------------------------
2023-07-09 16:22:43 [ WARNING] <RetryCallState 4416663328: attempt #1; slept for 5.0; last result: failed (SystemExit 429 too many requests)> (retrying.py:18)
2023-07-09 16:22:48 [ WARNING] <RetryCallState 4416661600: attempt #1; slept for 1.0; last result: failed (SystemExit some other exception)> (retrying.py:18)
2023-07-09 16:22:49 [ WARNING] <RetryCallState 4416663328: attempt #2; slept for 10.0; last result: failed (SystemExit 429 too many requests)> (retrying.py:18)
2023-07-09 16:22:54 [ WARNING] <RetryCallState 4416664240: attempt #1; slept for 1.0; last result: failed (SystemExit some other exception)> (retrying.py:18)
FAILED [100%]
end of stack trace:
    def mock_get():
        nonlocal item_number
        exception = exceptions[item_number]
        item_number += 1
>       raise exception
E       SystemExit: 429 too many requests
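To show what I mean by the two decorators not sharing state: every time the outer wrapper retries, it simply calls the inner wrapper again, and the inner wrapper's attempt counter is local to that call, so it starts over each time. A stripped-down, tenacity-free toy sketch of the same nesting (exception types and attempt limits are made up):

import functools


def simple_retry(exc_type, max_attempts):
    """Toy stand-in for @retry that only retries one exception type."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            attempt = 1  # local to this call of wrapper() -- reset on every call
            while True:
                try:
                    return func(*args, **kwargs)
                except exc_type:
                    if attempt >= max_attempts:
                        raise
                    attempt += 1
        return wrapper
    return decorator


@simple_retry(KeyError, max_attempts=3)    # "outer" decorator
@simple_retry(ValueError, max_attempts=3)  # "inner" decorator
def flaky():
    ...

# Each time the outer wrapper retries, it invokes the inner wrapper() again,
# so the inner attempt counter restarts at 1 -- the same way the inner
# decorator gets a fresh RetryCallState in the logs above.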