scraperdragon opened this issue 10 years ago
4) Downloading with/without backoff gives different errors.
In my opinion, if there is default retry behaviour, it should be configured for 5xx status codes only. So I agree with dragon that a 404 should not retry. Obviously, people who want to retry on all 4xx codes too should be able to do so with suitable configuration.
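To make the suggestion concrete, here is a minimal sketch of retry-on-5xx-only behaviour. This is a hypothetical helper, not the actual data-services-helpers API; `fetch` stands in for any callable that returns an object with a `status_code` attribute (e.g. `requests.get`).

```python
import time

RETRYABLE = range(500, 600)  # retry server errors only; fail fast on 4xx

def request_with_retry(fetch, url, max_tries=5, delay=0.0):
    """Retry fetch(url) on 5xx responses; return immediately otherwise."""
    last = None
    for attempt in range(max_tries):
        last = fetch(url)
        if last.status_code not in RETRYABLE:
            return last  # success, or a non-retryable 4xx such as 404
        time.sleep(delay * (2 ** attempt))  # exponential backoff between tries
    return last  # out of tries: hand back the final 5xx response
```

With this shape, a 404 is returned on the first attempt instead of burning through the retry budget, while a flaky 503 still gets retried.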
+1 on the lack of information. It's not easy to get at the status code, if that's of interest. I'm currently having to use a separate request to retrieve this on failure.
Check out the 'backoff' library, which may handle this better than my homebrew attempt ;)
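For readers unfamiliar with that library: it wraps a function in a decorator that retries on a given exception type, with a `giveup` predicate for errors that should never be retried. Here is a homebrew sketch of that pattern; the names below are illustrative, not backoff's actual API.

```python
import functools
import time

def retry_on_exception(exc_type, max_tries=3, base_delay=0.0, giveup=None):
    """Retry the wrapped function when exc_type is raised, unless
    giveup(exc) says the error is permanent (e.g. a 404)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_tries):
                try:
                    return fn(*args, **kwargs)
                except exc_type as exc:
                    if (giveup and giveup(exc)) or attempt == max_tries - 1:
                        raise  # permanent error, or out of tries
                    time.sleep(base_delay * (2 ** attempt))
        return wrapper
    return decorator
```

The real library spells this roughly as `@backoff.on_exception(backoff.expo, SomeError, max_tries=..., giveup=...)`, which keeps the retry policy out of the download code itself.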
@paulfurley Looks nice; thanks for the tip.
1) Takes ages in `request_url()` due to retrying. Should probably fail straight away on 404?
2) Raises an incredibly generic, minimal-information error: `RuntimeError: Max retries exceeded for <url>` rather than the original error. (Logs do contain more info.)
3) It's impossible to do any additional handling, e.g. "If it's a 404, that's fine; skip to the next item."
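Points 2) and 3) could both be addressed by raising an error that carries the offending status code, so callers get the information and can do their own handling. A hypothetical sketch (`MaxRetriesError` is an illustrative name, not the helper's real exception class):

```python
class MaxRetriesError(RuntimeError):
    """Like the current RuntimeError, but carries the last status code."""
    def __init__(self, url, status_code):
        super().__init__("Max retries exceeded for %s (last status: %s)"
                         % (url, status_code))
        self.url = url
        self.status_code = status_code

def process(urls, fetch):
    """Caller-side handling: a 404 is fine, skip to the next item."""
    results = []
    for url in urls:
        try:
            results.append(fetch(url))
        except MaxRetriesError as err:
            if err.status_code == 404:
                continue  # fine, move on to the next item
            raise         # anything else is still fatal
    return results
```

No separate follow-up request is needed just to find out what the status code was, which is the workaround described above.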