daniel-espinoza-fc / google-api-python-client

Automatically exported from code.google.com/p/google-api-python-client

Exponential backoff for access token requests #210

GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. Share the refresh token with 10+ machines.
2. Perform an authenticated API request across the same machines simultaneously.

What is the expected output? What do you see instead?
Eventual success. Access denied error.

What version of the product are you using? On what operating system?
oauth2client-1.0.0; any

Please provide any additional information below.
http://code.google.com/p/google-api-python-client/source/browse/oauth2client/client.py#616

The problem is that if you have many machines hitting the token endpoint with 
the same refresh token, you will hit the rate limit, and further requests will 
yield a 403 with no machine-readable indication that you have been rate 
limited (the response does contain human-readable HTML saying so). We need to 
treat these 403s as if they were a 429 (http://tools.ietf.org/html/rfc6585) 
and, in the absence of a Retry-After header, use exponential backoff to retry 
for an access token. (If a Retry-After header is available, use it as the 
baseline on top of which extra delay is added.)
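
A minimal sketch of that behavior, where refresh_once is a hypothetical 
callable standing in for a single token-endpoint request (it is not the 
library's actual refresh hook):

    import random
    import time

    def refresh_with_backoff(refresh_once, max_attempts=5):
        """Retry a token refresh, treating 403 like a 429 (rate limited).

        refresh_once is a hypothetical callable performing one
        token-endpoint request and returning (status, headers) from
        the HTTP response.
        """
        delay = 1.0
        for attempt in range(max_attempts):
            status, headers = refresh_once()
            if status not in (403, 429):
                return status  # success, or an error backoff won't fix
            # Use Retry-After as a baseline for extra delay when present.
            baseline = float(headers.get('retry-after', 0))
            time.sleep(baseline + delay + random.uniform(0, 1))
            delay *= 2
        return status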

This will not fix the underlying problem, but it will ameliorate it, and 
machines will gradually be able to start work.

Original issue reported on code.google.com by nherr...@google.com on 1 Nov 2012 at 6:17

GoogleCodeExporter commented 8 years ago
I understand the problem, but exponential backoff is not a solution.

Credentials always checks the Storage for an updated credential before 
refreshing:

  http://code.google.com/p/google-api-python-client/source/browse/oauth2client/client.py#572

Are the 10+ machines not reading the Credential from a common Storage instance?
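
For reference, the pattern I mean looks roughly like the sketch below. It 
uses the file-based Storage that ships with oauth2client, which only helps 
processes on one machine; a multi-machine deployment would need an equivalent 
store backed by something shared.

    import httplib2
    from oauth2client.file import Storage

    # Every worker opens the same persisted credential, not a copy of it.
    storage = Storage('/shared/credentials.dat')
    credentials = storage.get()

    # Because storage.get() attaches the store to the credential, a
    # refresh first re-reads the store and can reuse an access token
    # that another worker already fetched, instead of hitting the token
    # endpoint again.
    http = credentials.authorize(httplib2.Http())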

Original comment by jcgregorio@google.com on 1 Nov 2012 at 6:27

GoogleCodeExporter commented 8 years ago
Exponential backoff is the only solution when you are using a shared refresh 
token.

The 10+ machines are not reading the Credential from a common Storage instance. 
The common case is that a single storage is persisted and copied, and then all 
10+ instances operate on their copy.

Enforcing that all clients manage a shared refresh token through a single 
common Storage (so they can also share access tokens) is neither documented 
nor obvious, nor is there code that makes it easy to do.

Original comment by nherr...@google.com on 1 Nov 2012 at 7:49

GoogleCodeExporter commented 8 years ago
"""
The 10+ machines are not reading the Credential from a common Storage instance. 
The common case is that a single storage is persisted and copied, and then all 
10+ instances operate on their copy.
"""

Don't do that :)

"""
Enforcing that all clients manage a shared refresh token through a single 
common Storage (so they can also share access tokens) is neither documented 
nor obvious, nor is there code that makes it easy to do.
"""

So let's fix *that* problem. What shared storage mechanisms are available to 
you? Memcache? A database? Other?
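
For context on what such a backend involves: a custom store boils down to 
subclassing oauth2client's Storage and overriding the locked_get/locked_put 
hooks. A rough sketch against a hypothetical memcache-style client (any 
object exposing get and set would do):

    from oauth2client.client import Credentials, Storage

    class MemcacheStorage(Storage):
        """Sketch of a Storage shared across machines via memcache.

        client is assumed to be a memcache-style object with get/set;
        key is the cache key under which the credential JSON lives.
        """

        def __init__(self, client, key):
            super(MemcacheStorage, self).__init__()
            self._client = client
            self._key = key

        def locked_get(self):
            json_blob = self._client.get(self._key)
            if json_blob is None:
                return None
            credentials = Credentials.new_from_json(json_blob)
            credentials.set_store(self)  # refreshes write back through here
            return credentials

        def locked_put(self, credentials):
            self._client.set(self._key, credentials.to_json())

A real implementation would also override acquire_lock/release_lock with a 
distributed lock, since plain get/set still lets two workers refresh at the 
same moment.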

Original comment by jcgregorio@google.com on 1 Nov 2012 at 7:53

GoogleCodeExporter commented 8 years ago
Oh, I heartily concur. We need to fix that problem _generally_ (across 
language clients and classes of customers), even if we have some specific 
optimizations for people who have access to App Engine's robot credential, 
Compute Engine's metadata service, or some kind of global locking service a 
la Chubby -- more importantly, we need to fix it for developers who have none 
of the above.

That said, that's an entirely separate issue, and it does not address the 
case where people have not solved the problem (for lack of a solution, of 
awareness of the solution, or of time to implement it) and run into this 
failure. Either we should emit an appropriate Exception (e.g., one that says 
you are rate limited) when the developer wants full control over when and how 
HTTP requests are made, or handle it nicely for them when the developer wants 
a fire-and-forget request that just does the work and deals with server-error 
retries, etc.
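
The first option might look something like the sketch below; 
RateLimitedError is purely hypothetical, not an existing oauth2client class.

    from oauth2client.client import AccessTokenRefreshError

    class RateLimitedError(AccessTokenRefreshError):
        """Hypothetical: the token endpoint refused the refresh because
        this refresh token is being redeemed too frequently."""

        def __init__(self, retry_after=None):
            super(RateLimitedError, self).__init__('refresh is rate limited')
            # Seconds to wait, from Retry-After when the server sends it.
            self.retry_after = retry_after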

If (when?) we come up with a solution to the above issue, I'm assuming I'll 
come back and make language-specific documentation requests about how best to 
implement it in, say, Python. If you want to work from the specific to the 
general, let me know the new issue you file and I'll track it as well.

Original comment by nherr...@google.com on 1 Nov 2012 at 8:51

GoogleCodeExporter commented 8 years ago
403s are no longer accepted as challenges:

https://code.google.com/p/google-api-python-client/source/detail?r=accbae09fb505f9ee30bd67f4801a903c70ce6a5

Original comment by jcgregorio@google.com on 12 Feb 2013 at 8:54

GoogleCodeExporter commented 8 years ago
Any reason why 403s are no longer accepted? The server still responds with 
403s if the client sends requests too quickly. How do we deal with this?
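
For now the best we have is a manual backoff loop around execute() that 
treats 403 (and 429) as retryable; a sketch, assuming a request object built 
by the discovery client:

    import random
    import time

    from apiclient.errors import HttpError

    def execute_with_backoff(request, max_attempts=5):
        """Sketch: retry request.execute() on 403/429 with backoff.

        Note this retries every 403, including genuine permission
        errors; a production version should inspect the error body
        before retrying.
        """
        for attempt in range(max_attempts):
            try:
                return request.execute()
            except HttpError as error:
                retryable = error.resp.status in (403, 429)
                if not retryable or attempt == max_attempts - 1:
                    raise
                time.sleep(2 ** attempt + random.uniform(0, 1))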

Original comment by wonder...@google.com on 10 Feb 2014 at 11:14