Closed BrainSlugs83 closed 2 years ago
Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @Azure/aks-pm
Hey, I wanted to bump this with some more information -- this is still occurring, and my team and I suspect the az commands themselves are generating too much traffic; they may be polling too often to retrieve the result of the operation they performed.
Is there a setting or a part of the code I can look at to see where the polling for the result happens? I'd like to be able to track this issue down, as it is making AKS nearly unusable.
Azure CLI internally uses msrestazure and msrest for long-running operation polling.
Judging from the error, the throttling is triggered by `HighCostGetVMScaleSet30Min`. I checked the code of msrestazure, but it doesn't seem to handle the `Retry-After` header of throttling error 429 from the Compute Resource Provider.
```python
def _raise_if_bad_http_status_and_method(self, response):
    """Check response status code is valid for a Put or Patch
    request. Must be 200, 201, 202, or 204.

    :raises: BadStatus if invalid status.
    """
    code = response.status_code
    if code in {200, 202} or \
            (code == 201 and self.method in {'PUT', 'PATCH'}) or \
            (code == 204 and self.method in {'DELETE', 'POST'}):
        return
    raise BadStatus(
        "Invalid return status for {!r} operation".format(self.method))
```
@lmazuel, could you please help confirm?
In the meantime, I am reaching out to the ARM and AKS teams to check the root cause of the throttling.
This 429 throttling is a known issue with AKS calling VMSS, which we are investigating as part of the service issue.
A support request is in flight, SR: 120012323000897.
I am going to close this issue as it is not CLI related and we have oncall investigating.
@jluk -- regardless of the service request. I believe that the command line tool is also not handling 429 responses appropriately, according to MSDN (which I linked above). And it still leaves the user with no recourse, which is unfortunate.
Obviously it would be great if the service issue were solved, but regardless, I believe that the tool may be behaving inappropriately (per the above), and I would recommend you leave this bug open as part of the backlog (if 429 were handled appropriately, it could help mitigate issues like this in the future).
By the way, regarding the SR, is there any ETA on fixing this issue? -- I have my own SR open, but it's been 6 days and I've yet to hear of any progress on the issue.
Since cli core is already tagging an owner to investigate, will reopen and defer to them to handle issue closure.
The VMSS throttling is still under investigation; we've made multiple improvements in the cloud provider for the latest 1.15 patch versions and the latest CA addon versions. We are engaging on an additional set of work for ETAs; please look to azure/aks for an issue summary in the coming days.
Thanks @jluk, much appreciated! 🙂
Similar here:
```
VMSSAgentPoolReconciler retry failed: autorest/azure: Service returned an error. Status=429 Code="OperationNotAllowed"
Message="The server rejected the request because too many requests have been received for this subscription."
Details=[{"code":"TooManyRequests","message":"{\"operationGroup\":\"HighCostGetVMScaleSet3Min\",\"startTime\":\"2020-01-27T17:20:31.4086198+00:00\",\"endTime\":\"2020-01-27T17:22:31.4086198+00:00\",\"allowedRequestCount\":200,\"measuredRequestCount\":472}","target":"HighCostGetVMScaleSet3Min"},
{"code":"TooManyRequests","message":"{\"operationGroup\":\"HighCostGetVMScaleSet30Min\",\"startTime\":\"2020-01-27T17:20:31.4086198+00:00\",\"endTime\":\"2020-01-27T17:45:00+00:00\",\"allowedRequestCount\":1002,\"measuredRequestCount\":3030}","target":"HighCostGetVMScaleSet30Min"}]
InnerError={"internalErrorCode":"TooManyRequestsReceived"}
```
The only difference is that two errors are thrown, one for `HighCostGetVMScaleSet30Min` and one for `HighCostGetVMScaleSet3Min`. I was running Terraform to add a node pool.
Found the issue in the Azure health center; will try disabling the autoscaler for now.
Also of note -- the `allowedRequestCount` is crazy low today -- 200 for you, and 270 for me (before it was ~600, then ~1300, now ~200). Not sure why the allowed request count is varying so wildly.
EDIT: My bad: there are multiple `allowedRequestCount`s in the response -- looks like our values are similar. Nevermind. 🤔
FYI -- for anyone experiencing this issue: https://github.com/Azure/AKS/issues/1413
I raised a support ticket with Microsoft and got the recommendation below, which worked for now. Microsoft is working on a fix in a later version of AKS.
As discussed with the author of msrest:
429 is honored as part of the regular requests/urllib3 integration: https://github.com/urllib3/urllib3/blob/728d9244665ef5b03103cb74d7b409ebe4f23b43/src/urllib3/util/retry.py#L154
It will be honored only if there is a `Retry-After` header though, as per the HTTP spec. You will not find this code in the ARM implementation, because it's not ARM specific (ARM polling contains only code relative to ARM's interpretation of HTTP).
If your 429s had a `Retry-After`, the retries have likely been exhausted before you actually see the error. Receiving a 429 exception doesn't mean you didn't retry; it means you're out of retries.
If you reach 429 because of an ARM quota limit and you followed all your retries and `Retry-After`s, there is not much the SDK can do except fail. Think about the possible implications if we tried to do something: if you reach your hourly quota, what would you expect the SDK to do, wait for 30 minutes until the quota is back? Especially if there is parallel usage of this subscription, there is no guarantee that you even get the quota back at the end of the calculated time. And then, you wait even more? https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling#error-code
We will drive AKS team to fix the polling issue https://github.com/Azure/AKS/issues/1413 on the service side.
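For reference, the urllib3 retry behavior described above can be configured explicitly when a caller makes raw REST calls outside the CLI. A minimal sketch (the retry counts and backoff factor here are illustrative choices, not Azure SDK defaults):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry up to 5 times on HTTP 429, honoring a Retry-After header when
# the server sends one and using exponential backoff otherwise.
retry = Retry(
    total=5,
    status_forcelist=[429],
    allowed_methods=["GET", "PUT", "DELETE", "POST", "PATCH"],
    backoff_factor=1,  # exponential backoff between attempts
    respect_retry_after_header=True,
)

session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry))
```

Requests made through `session` will then transparently retry throttled calls instead of surfacing the 429 to the caller immediately.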
Hey, thanks for responding and for following up with the AKS team. Apologies for the long wait; we were finally engaging with Azure support, and this fell off my radar.
FYI: It looks like the original issue was not that the service was reporting 429 back to az.exe -- it was that the AKS load balancer was encountering 429 issues on its own, and that was being reported to the az.exe tool. It's odd, though, since the deployment might have succeeded, with the load balancer in a good state, yet the command would fail with that issue; it leaves a scripter in a very awkward state (see item 3 below).
But with that in mind, it may be that the tool is handling these requests appropriately and that this bug is moot. (I'm unable to confirm either way. But if y'all believe 429 is handled appropriately by az.exe, then by all means, feel free to re-close this bug.) 🙂
FYI: per RFC 6585, `Retry-After` is optional (as indicated by the word MAY in all uppercase characters). Don't get me wrong; it's really nice when it's present, and I'm of the opinion that if you write a service that returns 429, you really should include it, but from an "HTTP spec" perspective, it's 100% optional. A 429 doesn't mean you are out of retries; it means you are being throttled: you've made too many requests, and you should retry again later. The response can define when that "later" is via `Retry-After`, but it's not required to do so.
Generally, when "later" is not defined, in a long-running service environment we use exponential back-off (as documented in the MSDN best-practices documentation that I previously linked), because an extra 30 minutes (or even 3 hours) is no big deal. The usual strategy we use with Azure is that if a `Retry-After` is present, we honor it, and if not, we use the exponential back-off strategy. (This was even recommended to us by Azure engineers in the past, and it works well in my experience -- we ran a service that managed and provisioned several hundred ARM VM environments a day with that strategy.)
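That strategy can be sketched in a few lines -- honor `Retry-After` when present, otherwise back off exponentially. The helper name, cap, and jitter here are my own choices, not from any Azure SDK:

```python
import random

def retry_delay(attempt, retry_after=None, cap=300):
    """Seconds to wait before retry number `attempt` (0-based).

    Honors an explicit Retry-After value when the server provided one;
    otherwise uses capped exponential backoff with a little jitter so
    parallel clients don't all retry at the same instant.
    """
    if retry_after is not None:
        return float(retry_after)
    base = min(2 ** attempt, cap)
    return base + random.uniform(0, 1)
```

For example, the `Please try after '17' seconds` error shown later in this thread would map to `retry_delay(0, retry_after="17")`, while a 429 with no header would back off 1s, 2s, 4s, and so on.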
A good fallback strategy, if you need to "give up" after a certain amount of time or number of retries, is to make the commands idempotent, so that a user can retry the command later on their own and resume where they left off. This was the other part of the issue that we encountered: a command failed, and even waiting a day and re-running it, it would still fail, because the command was not idempotent. (Perhaps adding a `--force` or other switch could fix that in this case.)
As for item 2, I think it is hard-coded in msrestazure to only retry 4 times. @lmazuel, do you think it is possible to expose this config so that users can control the behavior? Maybe via environment variables.
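If such a knob were exposed, the override could look something like the sketch below. The environment variable name is hypothetical -- it is not an existing Azure CLI or msrestazure setting:

```python
import os

def lro_max_retries(default=4):
    """Number of long-running-operation retries to allow.

    Hypothetical env-var override for the retry count that msrestazure
    currently hard-codes; falls back to the existing default of 4.
    """
    raw = os.environ.get("AZURE_CLI_LRO_MAX_RETRIES")
    return int(raw) if raw else default
```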
As for item 3, indeed it is good practice to make az commands idempotent. Commands like `az storage account create` already follow this pattern. @jluk, is it possible for your team to make this command idempotent as requested?
I have a similar problem with `az storage list`. The command fails with a 429 message and does not appear to respect the throttling:

```
ValidationError: ("{\"error\":{\"code\":\"ResourceCollectionRequestsThrottled\",\"message\":\"Operation 'Microsoft.Storage/storageAccounts/read' failed as server encountered too many requests. Please try after '17' seconds. Tracking Id is '78db6dda-b7da-45cf-b763-202cdedf94f4'.\"}}")
```
This cascades down and breaks items that depend on the Azure CLI such as Terraform.
While I believe the source of the problem is the actual backend (the list request is only 4 actual requests), the Azure CLI should have a way to handle this gracefully.
Edit: Removed the Terraform note. They're using the REST / Go SDK so it appears to be the same issue with simply listing the accounts but not directly related.
@BrainSlugs83 Apologies for the late reply. This issue is open for quite sometime now. Please let us know if you are still looking for assistance on this. Awaiting your reply.
@navba-MSFT I have the same issue when running the following:

```
az rest -m DELETE -u 'https://management.azure.com/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Web/sites/<web app name>/instances/<instance id>/processes/<process id>?api-version=2020-12-01'
```

Result:

```
ERROR: Too Many Requests({"Code":"TooManyRequests","Message":"Too many requests. Please try again later.","Target":null,"Details":[{"Message":"Too many requests. Please try again later."},{"Code":"TooManyRequests"},{"ErrorEntity":{"Code":"TooManyRequests","Message":"Too many requests. Please try again later."}}],"Innererror":null})
```

...but the process is removed.
I want to check `$LASTEXITCODE` in PowerShell to verify the command succeeded, but it is nonzero and my script fails.
Workaround: do not check `$LASTEXITCODE` and trust that the process is removed, which is a bit odd.
Hi, we're sending this friendly reminder because we haven't heard back from you in a while. We need more information about this issue to help address it. Please be sure to give us your input within the next 7 days. If we don't hear back from you within 14 days of this comment the issue will be automatically closed. Thank you!
I strongly support adding a mechanism to configure the retry logic when the backend returns errors like HTTP 429. It could be customizable with global flags, like `--retry-count=[int]` and `--retry-max-secs=[int]`.
As of today, designing idempotent scripts is really hard when dealing with HTTP 429 errors.
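Until flags like those exist, callers can approximate them with an external wrapper. A rough sketch (the function name and the flag-to-parameter mapping are assumptions, not CLI behavior):

```python
import subprocess
import time

def run_with_retry(args, retry_count=4, retry_max_secs=60):
    """Rerun a CLI command with capped exponential backoff on failure.

    Approximates the proposed --retry-count / --retry-max-secs flags as
    a wrapper, e.g. args = ["az", "aks", "nodepool", "add", ...].
    Returns the last CompletedProcess so the caller can still inspect
    the exit code and output.
    """
    for attempt in range(retry_count + 1):
        result = subprocess.run(args, capture_output=True, text=True)
        if result.returncode == 0:
            return result
        if attempt < retry_count:
            time.sleep(min(2 ** attempt, retry_max_secs))
    return result
```

Note this only retries blindly on a nonzero exit code; it cannot distinguish a 429 from a genuine failure, which is exactly why idempotent commands (or a built-in flag) would be preferable.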
Could someone please reopen this issue? As @clemlesne stated, adding a global flag option would be very beneficial to the community. Otherwise, users have to build their own workarounds for this missing feature, which is less than ideal.
Describe the bug
Running az commands can generate 429 "too many requests" exceptions from Azure (possibly related to `az aks`? or possibly all commands -- I've definitely seen this at random from Azure before). It seems this happens with long-running commands after they have already executed, while az is polling Azure for the result.
Ideally, when this happens, az should just exponentially back off (i.e. increase the timeout and try again). (Sometimes the 429 response even includes a `Retry-After` header that tells you exactly how long to wait!)
IMO, the REAL issue is that you get back a failure message and the command aborts with no results -- even if the command was successful (you can't even just rerun the command at that point). Basically, the command shouldn't throw a permanent error unless it has actually failed. If the command is still running and might possibly succeed, but you just failed to poll for a result, you should back off and retry.
Command Name
az aks nodepool add --resource-group MyResourceGroup --cluster-name MyClusterName --os-type Windows --node-vm-size "Standard_B2s" --name window --node-count 2 --kubernetes-version 1.13.12 --min-count 2 --max-count 6 --enable-cluster-autoscaler
Errors:
To Reproduce:
Steps to reproduce the behavior.
Expected Behavior
Az.exe shouldn't fail when the initial command turns out to be successful, because it leaves the user in an unrecoverable state (e.g. the initial command appears to have failed, there are no output results, and re-running the command also fails, because now the resource exists! -- so you not only don't handle the 429 yourself, but you prevent the user from handling it too!).
Specifically, calls to Azure made by Az.exe which return a 429 status should have transient fault handling baked in -- as specified by MSDN, best practices for cloud applications:
All applications that communicate with remote services and resources must be sensitive to transient faults.
Environment Summary
Additional Context