googleapis / google-auth-library-python

Google Auth Python Library
https://googleapis.dev/python/google-auth/latest/
Apache License 2.0

Transient RefreshError exceptions from compute_engine credentials are not retried #1562

Open droyo opened 4 months ago

droyo commented 4 months ago

It's not unheard of for requests to the GCE metadata server to fail with a transient error, like a 503, 500, or 429. These requests can and should be retried, and they are in certain code paths. However, one very important code path, the `credentials.refresh(...)` method for credentials obtained from the GCE metadata server, does not retry them.

Environment details

Steps to reproduce

  1. Set up an HTTP proxy on http://localhost:8080/ that injects transient 429 errors, for example one that makes every other request to a `/token` endpoint fail with status code 429 (a rough sketch of such a proxy is included after the example output below).
  2. Save `getcreds.py`:

```python
import google.auth
import google.auth.transport.requests

import logging

logging.getLogger('google.auth').setLevel(logging.DEBUG)
logging.getLogger('google.auth').addHandler(logging.StreamHandler())

request = google.auth.transport.requests.Request()
creds, project = google.auth.default()
creds.refresh(request)
```

  3. Run `while true ; do http_proxy=http://localhost:8080/ python3 getcreds.py; sleep 1; done`
  4. The first or second request should fail. Here is some example output:

```
Checking None for explicit credentials as part of auth process...
Checking Cloud SDK credentials as part of auth process...
Cloud SDK credentials not found on disk; not using them
Making request: GET http://169.254.169.254
Making request: GET http://metadata.google.internal/computeMetadata/v1/project/project-id
Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true
Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/REDACTED/token
Traceback (most recent call last):
  File "/home/REDACTED/.local/lib/python3.9/site-packages/google/auth/compute_engine/credentials.py", line 127, in refresh
    self.token, self.expiry = _metadata.get_service_account_token(
  File "/home/REDACTED/.local/lib/python3.9/site-packages/google/auth/compute_engine/_metadata.py", line 351, in get_service_account_token
    token_json = get(request, path, params=params, headers=metrics_header)
  File "/home/REDACTED/.local/lib/python3.9/site-packages/google/auth/compute_engine/_metadata.py", line 243, in get
    raise exceptions.TransportError(
google.auth.exceptions.TransportError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/REDACTED/token from the Google Compute Engine metadata service. Status: 429 Response:\nb'Too many requests\n'", <google.auth.transport.requests._Response object at 0x7fe0a13849d0>)
```

Note how only 1 attempt is logged for the `/token` endpoint.
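For reference, here is a rough sketch of the kind of fault-injecting proxy described in step 1, built only on the standard library. It is illustrative, not the exact proxy I used; any proxy that fails every other `/token` request with a 429 will do:

```python
# fault_proxy.py -- illustrative sketch only, not the exact proxy used above.
# A tiny forward proxy on localhost:8080 that returns 429 for every other
# request whose path ends in /token, and forwards everything else unchanged.
import http.server
import urllib.parse
import urllib.request


class FaultInjectingProxy(http.server.BaseHTTPRequestHandler):
    token_requests = 0

    def do_GET(self):
        # Clients configured with http_proxy put the absolute URI on the
        # request line, so self.path is the full upstream URL.
        target_path = urllib.parse.urlsplit(self.path).path.rstrip("/")
        if target_path.endswith("/token"):
            FaultInjectingProxy.token_requests += 1
            if FaultInjectingProxy.token_requests % 2 == 1:
                body = b"Too many requests\n"
                self.send_response(429)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
                return
        # Forward everything else to the real destination unchanged.
        upstream = urllib.request.Request(self.path, headers=dict(self.headers))
        with urllib.request.urlopen(upstream) as resp:
            payload = resp.read()
            self.send_response(resp.status)
            for name, value in resp.getheaders():
                if name.lower() not in ("transfer-encoding", "connection", "content-length"):
                    self.send_header(name, value)
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)


if __name__ == "__main__":
    http.server.HTTPServer(("localhost", 8080), FaultInjectingProxy).serve_forever()
```

Run it with `python3 fault_proxy.py` before starting the loop in step 3.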

We do retry on certain types of errors: https://github.com/googleapis/google-auth-library-python/blob/d2ab3afdb567850121fec7de1d86fb5fb0fa80ed/google/auth/compute_engine/_metadata.py#L199-L212

In the scenario I'm describing, however, the HTTP request itself completes, but the response status code indicates a transient error, so we hit this code path instead:

https://github.com/googleapis/google-auth-library-python/blob/d2ab3afdb567850121fec7de1d86fb5fb0fa80ed/google/auth/compute_engine/_metadata.py#L235-L242

The following patch to `google/auth/compute_engine/_metadata.py` "fixes" the reproduction:

```diff
--- _metadata.py.orig	2024-07-25 01:51:12.567167923 +0000
+++ _metadata.py	2024-07-25 01:51:57.026072333 +0000
@@ -28,6 +28,7 @@
 from google.auth import environment_vars
 from google.auth import exceptions
 from google.auth import metrics
+from google.auth import transport
 
 _LOGGER = logging.getLogger(__name__)
 
@@ -202,7 +203,11 @@
     while retries < retry_count:
         try:
             response = request(url=url, method="GET", headers=headers_to_use)
```
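Spelled out as plain Python rather than a diff (this is only a sketch of the idea, and it assumes `transport.DEFAULT_RETRYABLE_STATUS_CODES` is the right set of status codes to consult), the change treats a completed response with a retryable status code the same way the existing loop treats a transport exception:

```python
# Sketch of the idea only -- names like request, url, headers_to_use,
# retry_count and _LOGGER come from the surrounding _metadata.get() function.
from google.auth import exceptions, transport

retries = 0
while retries < retry_count:
    try:
        response = request(url=url, method="GET", headers=headers_to_use)
    except exceptions.TransportError as e:
        _LOGGER.warning(
            "Compute Engine Metadata server unavailable on attempt %s of %s. Reason: %s",
            retries + 1, retry_count, e,
        )
        retries += 1
        continue
    if response.status in transport.DEFAULT_RETRYABLE_STATUS_CODES:
        # New: a completed response with a transient status code is retried too.
        _LOGGER.warning(
            "Compute Engine Metadata server returned %s on attempt %s of %s.",
            response.status, retries + 1, retry_count,
        )
        retries += 1
        continue
    break
# The existing "ran out of retries" and non-OK status handling below the loop
# is unchanged.
```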

I put "fixes" in quotes because in a real failure, a transient error is likely caused by the GCE metadata server or one of its dependencies being overwhelmed, and some degree of exponential backoff should be used. The existing logic makes sense for a timeout, because some time has already been spent waiting.

A separate but related request would be for the RefreshError raised to have the retryable property set appropriately, so library users can decide what to do on transient failures.
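To illustrate, with `retryable` populated for this path, a caller could write something like the following (my own sketch, not an API this library provides):

```python
# Caller-side sketch: retry creds.refresh() on transient failures with simple
# exponential backoff, relying on RefreshError.retryable being set correctly
# (which is exactly what this issue is asking for -- today it is not).
import time

import google.auth
import google.auth.transport.requests
from google.auth import exceptions


def refresh_with_backoff(creds, request, attempts=4, initial_delay=0.5):
    delay = initial_delay
    for attempt in range(1, attempts + 1):
        try:
            creds.refresh(request)
            return
        except exceptions.RefreshError as exc:
            if attempt == attempts or not exc.retryable:
                raise
            time.sleep(delay)
            delay *= 2  # exponential backoff between attempts


request = google.auth.transport.requests.Request()
creds, _project = google.auth.default()
refresh_with_backoff(creds, request)
```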

clundin25 commented 4 months ago

I think this is a reasonable thing to add. This repo has an exponential backoff implementation that can be used for retries.
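As a rough sketch (I have not double-checked the helper's exact constructor arguments), the metadata retry loop could iterate over that backoff object instead of a bare counter:

```python
# Sketch only: what the loop in _metadata.get() might look like driven by the
# backoff helper in google.auth._exponential_backoff. The helper is assumed to
# be iterable and to sleep between attempts, as it is used elsewhere in this
# repo; its constructor arguments may differ.
from google.auth import _exponential_backoff, exceptions, transport


def get_with_backoff(request, url, headers, total_attempts=5):
    last_error = None
    for attempt in _exponential_backoff.ExponentialBackoff(total_attempts=total_attempts):
        try:
            response = request(url=url, method="GET", headers=headers)
        except exceptions.TransportError as exc:
            last_error = exc
            continue
        if response.status in transport.DEFAULT_RETRYABLE_STATUS_CODES:
            last_error = exceptions.TransportError(
                "Transient status {} from the metadata server".format(response.status)
            )
            continue
        return response
    raise exceptions.TransportError(
        "Failed to retrieve {} after {} attempts".format(url, total_attempts)
    ) from last_error
```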

droyo commented 4 months ago

If it helps, I did a brief survey of implementations of this library in other languages.

The Go implementation does not seem to retry on a 429 error.

The Java implementation also does not seem to retry, but looking at this mapping, the authors may have intended for a 429 response to be retryable.

I tried looking into the NodeJS implementation but lost interest. I think it would be retried.