Open jupacaza opened 2 years ago
route to CXP team
Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @Azure/aks-pm.
| Author: | jupacaza |
|---|---|
| Assignees: | - |
| Labels: | `Service Attention`, `AKS`, `customer-reported`, `Auto-Assign` |
| Milestone: | Backlog |
Hi @jupacaza, what you are encountering is the ARM limit on the content size of the request body. I didn't find a public document describing this limit, but from some tests it appears to be roughly 200 KB.
For the az aks command invoke command, the files that need to be uploaded are compressed, but in your case the result still exceeds the limit. By design, this command is not suitable for complex scenarios.
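As a rough pre-flight check, you can compress the attachment folder locally and compare the result against that ~200 KB figure. This is only a sketch under the assumption that a plain tar+gzip archive is comparable to the compression the CLI applies; the exact mechanism and limit are not documented, and ./cert-manager and /tmp/attachment-check.tar.gz are placeholder paths.
# Estimate the compressed size of the folder that would be attached with -f .
tar -czf /tmp/attachment-check.tar.gz -C ./cert-manager .
# If this reports a size well above ~200 KB, the az aks command invoke request
# will likely fail with 'Request Entity Too Large'.
ls -lh /tmp/attachment-check.tar.gz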
We are experiencing this issue as well, starting from 29 July 2022 at 6:22 pm BST. Before that, it worked for an uncompressed package of 564 KB.
We use cert-manager as in the example below, with modifications to support a private AKS cluster by using az aks command invoke: https://docs.microsoft.com/en-us/azure/aks/ingress-tls?tabs=azure-cli
The helm version is 3.5.4, and the cert-manager chart version is pinned to v1.8.0.
The cert-manager Helm chart was imported into our ACR registry, and in the future we do not intend to keep it publicly reachable over the network for downloading charts.
export HELM_EXPERIMENTAL_OCI=1
helm pull oci://$(acrRegistry)/helm/infrastructure/$(JETSTACK_CERT_MANAGER_HELM_PACKAGE) --version $(JETSTACK_CERT_MANAGER_HELM_VERSION) --untar
az aks command invoke --resource-group $(AksResourceGroup) --name $(AksInstance) --command " \
helm upgrade \
--install \
--wait \
--namespace $(AksNameSpace) \
--version $(JETSTACK_CERT_MANAGER_HELM_VERSION) \
--set global.imagePullSecrets[0].name=acr-secret \
--set installCRDs=true \
--set nodeSelector.\"kubernetes\.io/os\"=linux \
--set image.repository=$(acrRegistry)/infrastructure/$(CERT_MANAGER_IMAGE_CONTROLLER) \
--set image.tag=$(CERT_MANAGER_TAG) \
--set webhook.image.repository=$(acrRegistry)/infrastructure/$(CERT_MANAGER_IMAGE_WEBHOOK) \
--set webhook.image.tag=$(CERT_MANAGER_TAG) \
--set cainjector.image.repository=$(acrRegistry)/infrastructure/$(CERT_MANAGER_IMAGE_CAINJECTOR) \
--set cainjector.image.tag=$(CERT_MANAGER_TAG) \
-f ingress/helm_certmanager_values.yml \
cert-manager ./cert-manager" -f .
Output:
v1.8.0: Pulling from reponame.azurecr.io/helm/infrastructure/jetstack/cert-manager
ERROR: Operation returned an invalid status 'Request Entity Too Large'
##[error]Bash exited with code '1'.
This is very inconvenient since the limit is too small, and it was working until 29 July 2022. We do not want to download the package directly over the internet within az aks command invoke, nor do we want to keep our ACR registry publicly open in the future.
@rcgokhale, are you even technically able to download a package within az aks command invoke? The pod runs as a non-root user on a distroless image. If you know how, please let me know.
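For what it is worth, here is an untested sketch of what pulling the chart inside the command pod could look like. It assumes the pod image provides a shell and a helm build with OCI support, that the pod can reach the registry over the network, and that ACR_PULL_TOKEN (a hypothetical variable, expanded locally before the command is sent) holds a valid ACR access token; none of this is confirmed in this thread.
# Untested sketch: pull the chart from ACR inside the command pod instead of
# attaching it with -f. /tmp is used because the pod runs as a non-root user,
# and the untarred folder name is assumed to be cert-manager. The all-zeros
# username is the ACR convention for token-based login.
az aks command invoke --resource-group $(AksResourceGroup) --name $(AksInstance) --command " \
  helm registry login $(acrRegistry) --username 00000000-0000-0000-0000-000000000000 --password $ACR_PULL_TOKEN && \
  helm pull oci://$(acrRegistry)/helm/infrastructure/$(JETSTACK_CERT_MANAGER_HELM_PACKAGE) --version $(JETSTACK_CERT_MANAGER_HELM_VERSION) --untar --untardir /tmp && \
  helm upgrade --install cert-manager /tmp/cert-manager --namespace $(AksNameSpace)"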
Related command
The deploycharts.sh looks like this:
and the contents of the folder attached have this structure:
Describe the bug
When executing this command to deploy our Prometheus chart (https://prometheus-community.github.io/helm-charts/kube-prometheus-stack:35.4.2) it fails with the following error:
The chart is not big (2.08 MB according to Windows Explorer). So what's the attachment size limitation for this command and how can we increase it?
To Reproduce
Pull the chart:
helm pull kube-prometheus-stack --repo https://prometheus-community.github.io/helm-charts --version 35.4.2 --untar --untardir <your-destination-folder>
Then deploy this chart to a Kubernetes cluster with the default values using az aks command invoke, and use the parameter -f '.' to attach the whole chart; a sketch of such an invocation is shown below.
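A hypothetical reproduction might look like the following; the resource group, cluster name, and namespace are placeholders, not values from the original report.
# Run from the folder containing the pulled kube-prometheus-stack directory,
# so that -f . attaches the whole chart.
az aks command invoke \
  --resource-group <your-resource-group> \
  --name <your-aks-cluster> \
  --command "helm upgrade --install kube-prometheus-stack ./kube-prometheus-stack --namespace monitoring --create-namespace" \
  -f .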
Expected behavior
Chart deployment should succeed without the 'Request Entity Too Large' error.
Environment summary
Azure CLI 2.32.0
We use Ubuntu 18.04 images and Azure Container Instances (part of Ev2 Shell extensions, https://ev2docs.azure.net/features/extensibility/shell/intro.html?q=adm-ubuntu-1804-l) to run these deployment scripts and deploy to our private AKS cluster.
Additional context
In the past we've had to split the charts into parts to work around this.
We are a Microsoft team (jucallej@microsoft.com).