Open tkohn opened 2 years ago
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Hey folks, have you had time to investigate the problem?
Hi @tkohn ,
Sorry for the delay.
I recommend setting these values via the values.yaml file instead of editing the deployment directly. You can use the following settings in values.yaml:
```yaml
## @param extraVolumes Optionally specify extra list of additional volumes for Controller pods
##
extraVolumes:
  - name: workdir
    emptyDir: {}
## @param extraVolumeMounts Optionally specify extra list of additional volumeMounts for Controller container(s)
##
extraVolumeMounts:
  - name: workdir
    mountPath: /usr/share/nginx/html
## @param initContainers Add init containers to the controller pods
## Example:
## initContainers:
##  - name: your-image-name
##    image: your-image
##    imagePullPolicy: Always
##    ports:
##      - name: portname
##        containerPort: 1234
##
initContainers:
  - name: install
    image: busybox:1.28
    command:
      - wget
      - "-O"
      - "/work-dir/index.html"
      - http://info.cern.ch
    volumeMounts:
      - name: workdir
        mountPath: "/work-dir"
```
For this to take effect, you must pass the values.yaml file when running helm:

```shell
helm upgrade --install --repo https://charts.bitnami.com/bitnami --set commonLabels.test=test nginx-ingress-controller nginx-ingress-controller -f values.yaml
```
Please try it, and feel free to comment here if you run into any problems. Thanks for your feedback.
Hello @CeliaGMqrz
Thanks for your answer.
The current problem is that an admission webhook modifies the deployment in the cluster. Your suggested solution means users would have to diff the deployment before and after the admission webhook runs and copy that diff into the values file.
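To make concrete what that workaround would involve, here is a minimal hypothetical sketch (plain Python, not an existing tool) of the diffing step: comparing a chart-rendered list against the webhook-mutated live list, keyed by `name`, to find the entries a user would have to copy into `extraVolumes` by hand:

```python
# Hypothetical sketch: find list entries (keyed by "name") that exist in the
# live, webhook-mutated object but not in the chart-rendered manifest. These
# are the entries a user would have to copy into values.yaml manually.
def added_by_name(rendered, live):
    rendered_names = {item["name"] for item in rendered}
    return [item for item in live if item["name"] not in rendered_names]

# Example data (assumed, for illustration): the chart renders no extra
# volumes, but a mutating webhook injected a "workdir" volume.
rendered_volumes = []
live_volumes = [{"name": "workdir", "emptyDir": {}}]

print(added_by_name(rendered_volumes, live_volumes))
```

In practice the rendered list would come from something like `helm get manifest` and the live list from `kubectl get deployment -o json`; keeping the two in sync by hand is exactly the burden described above.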
Do you have any other ideas to debug this Chart?
I ask because I want to understand what could trigger this issue. My helm chart example https://github.com/tkohn/example-issue-helm does not have this problem, so it is also possible that the issue lies in the helm implementation rather than in the bitnami/nginx-ingress-controller chart.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Hi @tkohn
Sorry for the delay. I have been able to reproduce the issue. I will have to investigate further as this is a particular case. When I have an update I will let you know.
Thanks for your report.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Any updates?
Name and Version
bitnami/nginx-ingress-controller 9.3.3
What steps will reproduce the bug?

```shell
helm upgrade --install --repo https://charts.bitnami.com/bitnami nginx-ingress-controller nginx-ingress-controller
kubectl edit deployment nginx-ingress-controller
```

After editing, save the deployment and verify that it contains the new volumeMounts. Then run

```shell
helm upgrade --install --repo https://charts.bitnami.com/bitnami --set commonLabels.test=test nginx-ingress-controller nginx-ingress-controller
```

to trigger a patch. The upgrade fails with:

```
Error: UPGRADE FAILED: cannot patch "nginx-ingress-controller" with kind Deployment: Deployment.apps "nginx-ingress-controller" is invalid: spec.template.spec.initContainers[0].volumeMounts[0].name: Not found: "workdir"
```
What is the expected behavior?
The helm upgrade command should not fail on the volumeMounts when a mutating admission webhook adds a volumeMount to the deployment.
What do you see instead?
The helm upgrade fails with the `Not found: "workdir"` error shown in the reproduction steps above.
Additional information
At first I thought this was a bug in helm. I created a small example project to reproduce the issue: https://github.com/tkohn/example-issue-helm. But my example chart does not fail like the nginx-ingress-controller chart does.
My guess is that a sub-chart or some logic in the chart is messing up the volumes.
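For what it's worth, here is one failure shape that matches the error above, offered only as an illustrative sketch, not a claim about the actual root cause (Helm normally applies a three-way strategic merge patch to native kinds like Deployments): if list fields are patched with array-replacement semantics, as in RFC 7386 JSON merge patch, a patch that touches `spec.volumes` wipes out the webhook-added `workdir` volume while the initContainer that mounts it survives, producing exactly a `Not found: "workdir"` validation error:

```python
# Minimal RFC 7386 (JSON merge patch) implementation: objects merge key by
# key, but arrays are replaced wholesale.
def json_merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # a null in the patch deletes the key
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

# Live spec after a mutating webhook (or manual edit) added a volume and an
# initContainer that mounts it (data assumed, for illustration).
live = {
    "volumes": [{"name": "workdir", "emptyDir": {}}],
    "initContainers": [
        {"name": "install", "volumeMounts": [{"name": "workdir"}]}
    ],
}

# A patch that sets "volumes" to the chart's (empty) list replaces the array
# wholesale but never touches "initContainers" -- orphaning the volumeMount.
patched = json_merge_patch(live, {"volumes": []})
print(patched["volumes"])                               # []
print(patched["initContainers"][0]["volumeMounts"][0])  # still mounts "workdir"
```

The orphaned mount is the same inconsistency the API server rejects in the error above; whether the chart actually triggers this path is exactly what needs investigating.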
I tested it with helm 3.9.4 and 3.10.1 on Kubernetes 1.21, 1.22, and 1.23.