Azure / AKS

Azure Kubernetes Service
https://azure.github.io/AKS/

Control Plane Unavailable & Connectivity Issues #3773

Closed: liamgib closed this issue 6 months ago

liamgib commented 1 year ago

What happened: Hi, in two of our clusters we are seeing concerning errors in the API server diagnostic logs, and operations against the control plane from within the cluster either time out or fail with a 503 Service Unavailable error.
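For context, this is roughly how the failures can be quantified from inside the cluster; a minimal sketch that probes the API server's /readyz endpoint using the pod's service account token (illustrative only, not output from the clusters above):

# Sketch: repeatedly probe the in-cluster API server and count timeouts / 503s.
import os, time, requests

host = os.environ["KUBERNETES_SERVICE_HOST"]
port = os.environ["KUBERNETES_SERVICE_PORT"]
token = open("/var/run/secrets/kubernetes.io/serviceaccount/token").read().strip()
ca = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

failures = 0
for _ in range(100):
    try:
        r = requests.get(f"https://{host}:{port}/readyz",
                         headers={"Authorization": f"Bearer {token}"},
                         verify=ca, timeout=5)
        if r.status_code == 503:
            failures += 1
    except requests.RequestException:  # timeouts, connection resets, EOFs
        failures += 1
    time.sleep(1)
print(f"{failures}/100 probes failed")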

In one of our mission-critical clusters, we have seen errors of this nature for over 100 days. We've opened multiple Sev A support tickets, but the support engineers have been unable to help and have stopped responding.

{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"collectedBy":"fluent-bit","log":"E0709 23:59:15.354676       1 controller.go:113] loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable\n","containerID":"0a82de83d27fe452a760871a2f46cc79e3374df3931a488de26623902da2ccf4","stream":"stderr","pod":"kube-apiserver-55fbb6f9b6-jwzwv"}, "resourceId": "/SUBSCRIPTIONS/<REDACTED>/RESOURCEGROUPS/<REDACTED>/PROVIDERS/MICROSOFT.CONTAINERSERVICE/MANAGEDCLUSTERS/<REDACTED>", "time": "2023-07-09T23:59:15.354888317Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"collectedBy":"fluent-bit","log":"I0709 23:58:50.943368       1 available_controller.go:474] \"changing APIService availability\" name=\"v1beta1.metrics.k8s.io\" oldStatus=False newStatus=True message=\"all checks passed\" reason=\"Passed\"\n","stream":"stderr","pod":"kube-apiserver-55fbb6f9b6-cqvwm","containerID":"eec30e6ebcb570babe73899a7a08232240531c5d2a8b9309993007ea675c9f88"}, "resourceId": "REDACTED", "time": "2023-07-09T23:58:50.948639807Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"collectedBy":"fluent-bit","log":"I0709 23:58:51.342947       1 available_controller.go:474] \"changing APIService availability\" name=\"v1beta1.metrics.k8s.io\" oldStatus=False newStatus=True message=\"all checks passed\" reason=\"Passed\"\n","stream":"stderr","pod":"kube-apiserver-55fbb6f9b6-cqvwm","containerID":"eec30e6ebcb570babe73899a7a08232240531c5d2a8b9309993007ea675c9f88"}, "resourceId": "REDACTED", "time": "2023-07-09T23:58:51.343191632Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"collectedBy":"fluent-bit","log":"E0709 23:58:51.357767       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again\n","stream":"stderr","pod":"kube-apiserver-55fbb6f9b6-cqvwm","containerID":"eec30e6ebcb570babe73899a7a08232240531c5d2a8b9309993007ea675c9f88"}, "resourceId": "REDACTED", "time": "2023-07-09T23:58:51.357894583Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"collectedBy":"fluent-bit","log":"I0709 23:58:51.638729       1 available_controller.go:474] \"changing APIService availability\" name=\"v1beta1.metrics.k8s.io\" oldStatus=False newStatus=True message=\"all checks passed\" reason=\"Passed\"\n","stream":"stderr","pod":"kube-apiserver-55fbb6f9b6-cqvwm","containerID":"eec30e6ebcb570babe73899a7a08232240531c5d2a8b9309993007ea675c9f88"}, "resourceId": "REDACTED", "time": "2023-07-09T23:58:51.638884438Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"collectedBy":"fluent-bit","log":"E0709 23:58:51.644609       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again\n","stream":"stderr","pod":"kube-apiserver-55fbb6f9b6-cqvwm","containerID":"eec30e6ebcb570babe73899a7a08232240531c5d2a8b9309993007ea675c9f88"}, "resourceId": "REDACTED", "time": "2023-07-09T23:58:51.645251007Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"collectedBy":"fluent-bit","log":"I0709 23:58:51.741385       1 available_controller.go:474] \"changing APIService availability\" name=\"v1beta1.metrics.k8s.io\" oldStatus=False newStatus=True message=\"all checks passed\" reason=\"Passed\"\n","stream":"stderr","pod":"kube-apiserver-55fbb6f9b6-cqvwm","containerID":"eec30e6ebcb570babe73899a7a08232240531c5d2a8b9309993007ea675c9f88"}, "resourceId": "REDACTED", "time": "2023-07-09T23:58:51.741529576Z"}

and

{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"pod":"kube-apiserver-86bbd744c-v44j7","collectedBy":"fluent-bit","log":"I0710 02:44:54.168481       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.\n","containerID":"6cc4c2ea109f9c15b02eea115b0cd022ebbbf6fb55b5967f02b32a76b1a479e0","stream":"stderr"}, "resourceId": "REDACTED", "time": "2023-07-10T02:44:54.168634773Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"E0710 02:45:00.082070       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"error dialing backend: EOF\"}: error dialing backend: EOF\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED", "time": "2023-07-10T02:45:00.08221296Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"I0710 02:46:10.186965       1 trace.go:205] Trace[1332715394]: \"Call validating webhook\" configuration:azure-policy-validating-webhook-configuration,webhook:byovalidation.policy.azure.com,resource:templates.gatekeeper.sh\/v1beta1, Resource=constrainttemplates,subresource:,operation:UPDATE,UID:493b76cd-2060-4a83-afc9-6c926569de91 (10-Jul-2023 02:46:05.186) (total time: 5000ms):\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED", "time": "2023-07-10T02:46:10.187173349Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"Trace[1332715394]: [5.000663428s] [5.000663428s] END\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED", "time": "2023-07-10T02:46:10.18720825Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"W0710 02:46:10.187005       1 dispatcher.go:174] Failed calling webhook, failing open byovalidation.policy.azure.com: failed calling webhook \"byovalidation.policy.azure.com\": failed to call webhook: Post \"https:\/\/azure-policy-webhook-service.kube-system.svc:443\/validategatekeeperresources?timeout=5s\": context deadline exceeded\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED", "time": "2023-07-10T02:46:10.18721065Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"E0710 02:46:10.187037       1 dispatcher.go:181] failed calling webhook \"byovalidation.policy.azure.com\": failed to call webhook: Post \"https:\/\/azure-policy-webhook-service.kube-system.svc:443\/validategatekeeperresources?timeout=5s\": context deadline exceeded\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED", "time": "2023-07-10T02:46:10.18721295Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"I0710 02:46:10.192951       1 trace.go:205] Trace[924856332]: \"GuaranteedUpdate etcd3\" audit-id:dbc9e4d4-adc2-4969-87b7-9e8ad35c18ec,key:\/templates.gatekeeper.sh\/constrainttemplates\/k8sazurev3allowedusersgroups,type:*unstructured.Unstructured (10-Jul-2023 02:46:05.182) (total time: 5010ms):\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED", "time": "2023-07-10T02:46:10.193125738Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"Trace[924856332]: ---\"About to Encode\" 5003ms (02:46:10.187)\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED", "time": "2023-07-10T02:46:10.193157038Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"Trace[924856332]: [5.010030967s] [5.010030967s] END\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED", "time": "2023-07-10T02:46:10.193163938Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"I0710 02:46:10.193747       1 trace.go:205] Trace[711846624]: \"Update\" url:\/apis\/templates.gatekeeper.sh\/v1beta1\/constrainttemplates\/k8sazurev3allowedusersgroups,user-agent:azurepolicyaddon\/v0.0.0 (linux\/amd64) kubernetes\/$Format,audit-id:dbc9e4d4-adc2-4969-87b7-9e8ad35c18ec,client:10.241.0.45,accept:application\/json, *\/*,protocol:HTTP\/2.0 (10-Jul-2023 02:46:05.181) (total time: 5012ms):\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED", "time": "2023-07-10T02:46:10.193893049Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"Trace[711846624]: ---\"Write to database call finished\" len:8917,err:nil 5010ms (02:46:10.192)\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED", "time": "2023-07-10T02:46:10.193911949Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"Trace[711846624]: [5.012002096s] [5.012002096s] END\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED", "time": "2023-07-10T02:46:10.193914349Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"E0710 02:46:38.993347       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"error dialing backend: EOF\"}: error dialing backend: EOF\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED", "time": "2023-07-10T02:46:38.993478671Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"E0710 02:46:39.109105       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"error dialing backend: EOF\"}: error dialing backend: EOF\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED", "time": "2023-07-10T02:46:39.109280617Z"}

{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"containerID":"6bd74cd6b1c8007e4bf29dcbbf23cad861e6c7f5f14f05d5330de1f15af28a16","stream":"stderr","pod":"kube-apiserver-55fbb6f9b6-7gq94","collectedBy":"fluent-bit","log":"I0709 23:58:50.773710       1 available_controller.go:474] \"changing APIService availability\" name=\"v1beta1.metrics.k8s.io\" oldStatus=False newStatus=False message=\"failing or missing response from https:\/\/10.100.3.33:4443\/apis\/metrics.k8s.io\/v1beta1: Get \\\"https:\/\/10.100.3.33:4443\/apis\/metrics.k8s.io\/v1beta1\\\": write unix @->\/tunnel-uds\/proxysocket: write: broken pipe\" reason=\"FailedDiscoveryCheck\"\n"}, "resourceId": "REDACTED", "time": "2023-07-09T23:58:50.773823509Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"containerID":"6bd74cd6b1c8007e4bf29dcbbf23cad861e6c7f5f14f05d5330de1f15af28a16","stream":"stderr","pod":"kube-apiserver-55fbb6f9b6-7gq94","collectedBy":"fluent-bit","log":"E0709 23:58:50.785192       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https:\/\/10.100.3.33:4443\/apis\/metrics.k8s.io\/v1beta1: Get \"https:\/\/10.100.3.33:4443\/apis\/metrics.k8s.io\/v1beta1\": write unix @->\/tunnel-uds\/proxysocket: write: broken pipe\n"}, "resourceId": "REDACTED", "time": "2023-07-09T23:58:50.785331012Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"containerID":"6bd74cd6b1c8007e4bf29dcbbf23cad861e6c7f5f14f05d5330de1f15af28a16","stream":"stderr","pod":"kube-apiserver-55fbb6f9b6-7gq94","collectedBy":"fluent-bit","log":"E0709 23:58:50.787082       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https:\/\/10.100.3.33:4443\/apis\/metrics.k8s.io\/v1beta1: Get \"https:\/\/10.100.3.33:4443\/apis\/metrics.k8s.io\/v1beta1\": write unix @->\/tunnel-uds\/proxysocket: write: broken pipe\n"}, "resourceId": "REDACTED", "time": "2023-07-09T23:58:50.787231646Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"containerID":"6bd74cd6b1c8007e4bf29dcbbf23cad861e6c7f5f14f05d5330de1f15af28a16","stream":"stderr","pod":"kube-apiserver-55fbb6f9b6-7gq94","collectedBy":"fluent-bit","log":"I0709 23:58:50.966989       1 available_controller.go:474] \"changing APIService availability\" name=\"v1beta1.metrics.k8s.io\" oldStatus=True newStatus=False message=\"failing or missing response from https:\/\/10.100.3.33:4443\/apis\/metrics.k8s.io\/v1beta1: Get \\\"https:\/\/10.100.3.33:4443\/apis\/metrics.k8s.io\/v1beta1\\\": write unix @->\/tunnel-uds\/proxysocket: write: broken pipe\" reason=\"FailedDiscoveryCheck\"\n"}, "resourceId": "REDACTED", "time": "2023-07-09T23:58:50.967399829Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"containerID":"6bd74cd6b1c8007e4bf29dcbbf23cad861e6c7f5f14f05d5330de1f15af28a16","stream":"stderr","pod":"kube-apiserver-55fbb6f9b6-7gq94","collectedBy":"fluent-bit","log":"E0709 23:58:50.973806       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https:\/\/10.100.3.33:4443\/apis\/metrics.k8s.io\/v1beta1: Get \"https:\/\/10.100.3.33:4443\/apis\/metrics.k8s.io\/v1beta1\": write unix @->\/tunnel-uds\/proxysocket: write: broken pipe\n"}, "resourceId": "REDACTED", "time": "2023-07-09T23:58:50.973958145Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"containerID":"6bd74cd6b1c8007e4bf29dcbbf23cad861e6c7f5f14f05d5330de1f15af28a16","stream":"stderr","pod":"kube-apiserver-55fbb6f9b6-7gq94","collectedBy":"fluent-bit","log":"E0709 23:58:50.975449       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https:\/\/10.100.3.33:4443\/apis\/metrics.k8s.io\/v1beta1: Get \"https:\/\/10.100.3.33:4443\/apis\/metrics.k8s.io\/v1beta1\": write unix @->\/tunnel-uds\/proxysocket: write: broken pipe\n"}, "resourceId": "REDACTED", "time": "2023-07-09T23:58:50.975587373Z"}

Note: The logs above are separate snippets; we see thousands of these errors per hour.
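For what it's worth, the FailedDiscoveryCheck flapping on v1beta1.metrics.k8s.io can be watched directly; a minimal sketch with the official Python client that reads the APIService's availability conditions (illustrative only):

# Sketch: read the Available condition for the flapping APIService.
from kubernetes import client, config

config.load_kube_config()
reg = client.ApiregistrationV1Api()
svc = reg.read_api_service("v1beta1.metrics.k8s.io")
for cond in svc.status.conditions or []:
    print(cond.type, cond.status, cond.reason, cond.message)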

We've also seen the following errors when trying to port-forward services.

{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"E0710 02:46:38.993347       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"error dialing backend: EOF\"}: error dialing backend: EOF\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED, "time": "2023-07-10T02:46:38.993478671Z"}
{ "category": "kube-apiserver", "operationName": "Microsoft.ContainerService/managedClusters/diagnosticLogs/Read", "properties": {"log":"E0710 02:46:39.109105       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"error dialing backend: EOF\"}: error dialing backend: EOF\n","stream":"stderr","pod":"kube-apiserver-86bbd744c-4wzqh","collectedBy":"fluent-bit","containerID":"2f6d9d971b04955df3add39bc371b9be201548b85ffdd7908692e6762cac37c1"}, "resourceId": "REDACTED", "time": "2023-07-10T02:46:39.109280617Z"}

What you expected to happen: There should be no errors in our control planes, and operations to the API servers should succeed.

How to reproduce it (as minimally and precisely as possible): Unknown

Anything else we need to know?: This is impacting management operations such as kubectl port-forward, as well as API server requests from both inside and outside the cluster. When the API server error rate rises, we often see several of our services crash because they can no longer reach the API server they depend on.
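For completeness, a minimal, purely illustrative retry-with-backoff sketch for clients that currently crash on the first transient API error; it does not address the underlying control plane issue:

# Sketch: retry a Kubernetes API call on transient 5xx / connection errors.
import time
from kubernetes import client, config
from kubernetes.client.rest import ApiException

def with_retries(call, attempts=5, backoff=2.0):
    last = None
    for i in range(attempts):
        try:
            return call()
        except ApiException as e:
            if e.status and e.status < 500:
                raise              # 4xx errors are not transient; re-raise
            last = e
        except Exception as e:     # connection reset / EOF / timeout
            last = e
        time.sleep(backoff * (i + 1))
    raise last

config.load_kube_config()
v1 = client.CoreV1Api()
pods = with_retries(lambda: v1.list_namespaced_pod("default"))
print(len(pods.items), "pods")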

Environment:

ghost commented 1 year ago

Hi liamgib, AKS bot here :wave: Thank you for posting on the AKS Repo, I'll do my best to get a kind human from the AKS team to assist you.

I might be just a bot, but I'm told my suggestions are normally quite good, as such:
1) If this case is urgent, please open a Support Request so that our 24/7 support team may help you faster.
2) Please abide by the AKS repo Guidelines and Code of Conduct.
3) If you're having an issue, could it be described on the AKS Troubleshooting guides or AKS Diagnostics?
4) Make sure you're subscribed to the AKS Release Notes to keep up to date with all that's new on AKS.
5) Make sure there isn't a duplicate of this issue already reported. If there is, feel free to close this one and '+1' the existing issue.
6) If you have a question, do take a look at our AKS FAQ. We place the most common ones there!

liamgib commented 1 year ago

To clarify: operations against the control plane do generally succeed, but these errors cause a percentage of requests to fail at times.

ghost commented 1 year ago

Triage required from @Azure/aks-pm

ghost commented 1 year ago

Action required from @Azure/aks-pm

ghost commented 1 year ago

Issue needing attention of @Azure/aks-leads

mdnix commented 1 year ago

Any updates on this?

We are seeing similar issues in our GitLab Runner running on AKS

2023-10-03T14:14:56+02:00   WARNING: Retrying...                                error=error dialing backend: read unix @->/tunnel-uds/proxysocket: read: connection reset by peer job=654466 project=400 runner=n9AZ8sJG

microsoft-github-policy-service[bot] commented 9 months ago

This issue has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs within 15 days of this comment.

microsoft-github-policy-service[bot] commented 9 months ago

Issue needing attention of @Azure/aks-leads

microsoft-github-policy-service[bot] commented 9 months ago

This issue has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs within 15 days of this comment.

microsoft-github-policy-service[bot] commented 9 months ago

Action required from @merooney, @bmoore-msft.

microsoft-github-policy-service[bot] commented 9 months ago

Triage required from @Azure/aks-pm @merooney, @bmoore-msft

microsoft-github-policy-service[bot] commented 7 months ago

This issue has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs within 15 days of this comment.

microsoft-github-policy-service[bot] commented 6 months ago

This issue will now be closed because it hasn't had any activity for 7 days after being marked stale. liamgib, feel free to comment again within the next 7 days to reopen it, or open a new issue after that time if you still have a question, issue, or suggestion.