Closed leeren closed 6 years ago
I'm not incredibly familiar with `Retry`, but this seems to be the expected behavior: each request gets 10 tries, and on each attempt a new `Retry` object is created, since they are meant to be reusable. I'm not sure there's an answer using `Retry` alone. However, it should be possible to write your own adapter for this purpose.
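As a quick illustration of why the budget isn't shared: urllib3's `Retry` is effectively immutable, and `increment()` returns a fresh `Retry` instead of mutating the one you mounted on the adapter. A small sketch using only the public `Retry` API:

```python
from urllib3.util.retry import Retry

retry = Retry(total=3)

# increment() hands back a *new* Retry with a decremented budget;
# the original object is left untouched.
new_retry = retry.increment(method='GET', url='/')

print(retry.total)         # still 3
print(new_retry.total)     # 2
print(new_retry is retry)  # False
```

So every request that starts from the same mounted `Retry(total=10)` walks its own private chain of new `Retry` objects, which is why the count is per request rather than per session.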
Here's my attempt at one possible solution. This may work for you, depending on your requirements for when and in what order requests are retried. Use/adjust at your own discretion.
```python
import grequests

def map_with_retries(requests, *map_args, max_retries=0, **map_kwargs):
    """
    Wraps grequests.map.
    First calls grequests.map on the requests, then retries any failed
    requests up to `max_retries` times, recursively.
    """
    retry_queue = []
    retry_count = 0
    _ex_handler = map_kwargs.pop('exception_handler', None)

    def ex_handler(request, ex, *args, **kwargs):
        nonlocal retry_count
        if retry_count < max_retries:
            retry_queue.append(request)
            retry_count += 1
        if _ex_handler:
            # Call the original exception handler, if there was one
            return _ex_handler(request, ex, *args, **kwargs)

    # Map the initial requests
    for result in grequests.map(requests,
                                *map_args,
                                exception_handler=ex_handler,
                                **map_kwargs):
        yield result

    # Now do the retries, if any
    if retry_queue:
        remaining_retries = max_retries - retry_count
        for result in map_with_retries(retry_queue,
                                       *map_args,
                                       max_retries=remaining_retries,
                                       exception_handler=_ex_handler,
                                       **map_kwargs):
            yield result
```
We'll still use `Retry`, but set `total` to 0 so it raises an exception immediately; the logic in `map_with_retries` will handle doing the actual retries.
```python
import grequests
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

sesh = requests.Session()
retry = Retry(total=0, status_forcelist=(500,))  # note the comma: (500) is just an int
adapter = HTTPAdapter(max_retries=retry)
sesh.mount('http://', adapter)
sesh.mount('https://', adapter)

rlist = [grequests.get('https://httpbin.org/status/500', session=sesh),
         grequests.get('https://httpbin.org/status/200', session=sesh)]
results = list(map_with_retries(rlist, max_retries=5))
# [None, <Response [200]>, None, None, None, None, None]
```
All that said, I don't think this is exactly an issue with grequests. If you feel differently or have any other details to elaborate on, we can reopen this.
Thanks a lot for the potential workaround! This was very helpful.
When issuing multiple `grequests` calls concurrently (i.e. via `map`), all sharing the same session, the configured maximum number of retries is not shared per session, but per request. To illustrate what I mean, I initiate sessions in the following fashion:

Creating the requests is then done through something like the following:

Now, when I call `grequests.map()`, what I find is that every request will individually retry up to 10 times, instead of sharing the maximum retry value of 10 (which is what I want). Is this because each request creates its own session? But the session field of each request points to the same memory location. The problem seems to be that on each `Retry()` object's `increment`, a new `Retry()` object is created and not referenced by the existing requests. Is there any way, then, to achieve what I would like to accomplish within the `grequests` framework, i.e. allocating a total number of retries to be shared among all async requests?
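The setup snippets referenced above didn't survive, but based on the description (one shared session, a retry total of 10), the configuration presumably looked something like the sketch below; the `total` value, status list, and the commented request construction are assumptions for illustration:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# One session shared by every request. The hope in the question is that
# Retry(total=10) would be a shared budget, but in practice it is applied
# per request, since each attempt derives its own new Retry object.
session = requests.Session()
retry = Retry(total=10, status_forcelist=[500])
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)

# Each grequests call then reuses that same session object, e.g.:
# rlist = [grequests.get(url, session=session) for url in urls]
```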