helm / helm

The Kubernetes Package Manager
https://helm.sh
Apache License 2.0

helm template --namespace should set the 'metadata.namespace' field on created resources #3553

Closed munnerz closed 5 years ago

munnerz commented 6 years ago

I'm currently using helm template to generate static manifests from our authoritative helm chart in the cert-manager project. You can see the script that does this here: https://github.com/jetstack/cert-manager/blob/master/hack/update-deploy-gen.sh.

Currently, despite specifying --namespace, the namespace field is not set on the generated resources. I'm aware that I could deploy with -n namespace; however, for a cleaner end-user experience it'd be preferable not to include this extra step.

In the meantime, I will likely add namespace: {{ .Release.Namespace }} to each of my namespaced resources; however, it'd be ideal if helm template itself could do this (as I understand it, the best practice for Helm charts is not to include the metadata.namespace field at all and to let helm/tiller manage namespace selection).

Is this the recommended approach, and would it be conceivable to change the behaviour of helm template to do this for us automatically?
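
For reference, a minimal sketch of that workaround (the resource, label, and image names here are hypothetical, not taken from cert-manager): adding namespace: {{ .Release.Namespace }} to a namespaced template makes helm template --namespace foo emit the field in the rendered manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-controller
  # explicitly carry the release namespace into the rendered output
  namespace: {{ .Release.Namespace }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}-controller
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-controller
    spec:
      containers:
        - name: controller
          image: example/controller:1.0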

fejta-bot commented 6 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 6 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten /remove-lifecycle stale

bbetter173 commented 6 years ago

Please don't close this issue - it impacts us quite heavily as having to fork charts just to add the namespace metadata is arduous.

deltaroe commented 6 years ago

It's a hack, but you can pass the output of helm template through this script to add the namespace: https://gist.github.com/deltaroe/63afd52ba84274ed5b86ba9b0c357e8f

helm template -f values.yaml -n NAME CHART | ./add-ns.py cool-namespace | kubectl apply -f-

daviddyball commented 5 years ago

Would it be safe to assume that this issue won't be looked into until Helm 3 is released?

Tronix117 commented 5 years ago

When using @deltaroe's hack, don't forget to also pass --namespace cool-namespace to helm template. Some chart providers have started adding the namespace key themselves because of this issue.
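
Putting the two together, the full pipeline would look roughly like this (Helm 2 syntax; the release name, chart, and namespace are placeholders):

helm template -f values.yaml -n NAME --namespace cool-namespace CHART \
  | ./add-ns.py cool-namespace \
  | kubectl apply --namespace cool-namespace -f -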

bacongobbler commented 5 years ago

From Helm's perspective, this doesn't really make sense for a couple of reasons:

I'm aware that I could deploy with -n namespace,

That's the main problem I have with this proposal. This isn't even a convenience change. Kubernetes resources do not need the namespace parameter to be set in order to be deployed in a particular namespace. The output of helm template can be applied to a particular namespace via kubectl apply --namespace foo, and from the API server's perspective that is the standard practice. kubectl and other tools in the Kubernetes ecosystem all work with resources without the namespace parameter present as it is an optional field.

My biggest concern is that by adding the namespace parameter to a resource for helm template, the output differs from the manifests Helm supplies to the Kubernetes API on helm install or helm upgrade (no namespace parameter), breaking the rule of least surprise. helm template was designed to display the output of what helm install will provide to the API server.

My other concern with this proposal is that it would be incredibly computationally expensive to implement as we would need to:

  1. render each and every resource being created from an Unstructured object (Kubernetes' internal representation of a JSON blob) into a serialized object (a v1.Deployment) BEFORE it's sent off to the cluster
  2. determine if the resource is a namespaced object, and that it does not already have a namespace parameter set (cluster resources like CRDs are not namespaced)

It also defeats one of Helm's basic design goals: don't mess with the user's templates. By injecting metadata into the objects, we (once again) break the rule of least surprise.

By implementing this, the end result would be a noticeable performance degradation for all users of Helm with no gain in functionality. Please continue to use kubectl's --namespace parameter, or look into how the Kubernetes API installs resources that lack a namespace field into a particular namespace. Thanks.
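
For instance, applying the namespace-less output of helm template into a particular namespace looks like this (the chart path and namespace are placeholders):

helm template ./mychart | kubectl apply --namespace foo -f -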

mingfang commented 5 years ago

I don't agree. If the --namespace is optional then the user setting that option knows what they are doing. Do not assume you know better than the user.

alexpkalinka commented 5 years ago

we break the rule of least surprise when rendering templates

Actually, I was very surprised by the current behavior. I expected helm template to give me a template that represents the resource to be created by the corresponding helm install. Basically, I expected helm template to be the same as helm install, except that it renders the template to stdout instead of applying it right away.

Because how else should I get this final template? For example, helm install --dry-run --debug does not output the namespace field either.

Also, if --namespace doesn't add namespace, then what does it do? In the documentation of helm template it says:

--namespace string         Namespace to install the release into

But the resulting YAML will not be installed into the specified namespace. It will be installed into the default namespace.

Helm does not gain any added functionality by doing this (in fact, Helm becomes slower to render these resources)

It will gain added functionality: users will be able to get a final template with a proper namespace. Currently, users resort to hacks/scripts, like those mentioned above, to get this.

That's the main problem I have with this proposal. This isn't even a convenience change. Kubernetes resources do not need the namespace parameter to be set in order to be deployed in a particular namespace.

Just because they don't need that parameter does not mean the parameter must always be excluded. Yes, namespace is optional, but it can also be specified explicitly in a template.

The output of helm template can be applied to a particular namespace via kubectl apply --namespace foo, and from the API server's perspective that is the standard practice.

If users want a template without namespace then they can omit the --namespace parameter. But if they want the rendered template with the namespace they should be able to get it with --namespace.

kubectl and other tools in the Kubernetes ecosystem all work with resources without the namespace parameter present as it is an optional field.

They can work without namespace, but they can also work with it. And sometimes embedding the namespace into the YAML is preferable, because it allows you to not specify --namespace to kubectl.

For example, assume you have several components installed into different namespaces. If the YAML files for those components have no namespace fields in them, then you need to remember the namespace for each resource to be able to use kubectl edit -f.
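
A small, hypothetical illustration of that point (file and namespace names are made up):

# without metadata.namespace in the files, you must remember each namespace yourself
kubectl edit -f component-a.yaml --namespace team-a
kubectl edit -f component-b.yaml --namespace team-b
# with metadata.namespace embedded, the namespace travels with the file
kubectl edit -f component-a.yaml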

My biggest concern is that by adding the namespace parameter to a resource for helm template, the output differs from the manifests Helm supplies to the Kubernetes API on helm install or helm upgrade (no namespace parameter), breaking the rule of least surprise.

That's not a problem with helm template; that's a problem with helm install. Currently helm install ignores the namespace field for some reason, "breaking the rule of least surprise".

helm template was designed to display the output of what helm install will provide to the API server.

Maybe it should be better communicated in the documentation? Because I never would've guessed that helm template works the way you described just by reading the docs.

AndiDog commented 5 years ago

Here's a workaround for everyone to insert metadata.namespace; it should cover the typical use cases. I'm using it to generate a fully rendered manifest including namespaces. Note that --namespace is recommended because some charts use .Release.Namespace to refer between objects. --width 99999 is optional, but gives nicer output for long, multi-line string values IMO. Tested with pip --quiet install yq==2.7.2.

release_namespace=xyz
helm template --namespace "${release_namespace}" ... \
    | yq --width 99999 --yml-output -s ".[] | select(. != null) | . * (if .kind != \"Namespace\" then {\"metadata\": {\"namespace\": \"${release_namespace}\"}} else {} end)"

kav commented 4 years ago

Adding from my accidental duplicate as well:

Any chance to at least fail loudly? All of the helm command-line help implies this will work. You've claimed previously that implementing this feature violates "least surprise", but I am telling you right now, as a user rather than a core contributor, that I was very surprised. I used -n to generate the YAML and expected the namespace to be "saved". I handed it off to a user without helm and they promptly created a bunch of junk in the default namespace.

We, as regular old users, were definitely surprised.

Diaphteiros commented 4 years ago

I was about to open an issue for this behaviour when I found this one. I can only agree that the --namespace option should set metadata.namespace for rendered manifests.

The way it works right now is strongly counter-intuitive in my opinion: some resources might reference other resources and thus use {{ .Release.Namespace }} somewhere in their spec. To have my application work, I have to use the --namespace option on helm template, which results in manifests where resources are referenced with namespaces, but the manifests of the referenced resources don't have a namespace set. The rendered manifests are inconsistent with themselves unless the same namespace is given to kubectl apply. Meanwhile, if I use helm install instead, everything works fine. So, neither

helm template charts/foo --namespace foo | kubectl apply -f -

nor

helm template charts/foo | kubectl apply --namespace foo -f -

are equivalent to

helm install charts/foo --namespace foo foo

This was totally unexpected and very surprising to me. I would have expected that the first command listed above yields the same results as the helm install one, which would be the case if the namespace was rendered into metadata.namespace.
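
To make that inconsistency concrete, here is a hypothetical pair of manifests (names invented) as rendered by helm template charts/foo --namespace foo: the RoleBinding's subject carries the namespace because the chart used {{ .Release.Namespace }} in its spec, while the ServiceAccount itself has no metadata.namespace, so it only ends up in foo if kubectl apply is also given --namespace foo.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: foo-sa            # no metadata.namespace in the rendered output
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: foo-binding       # no metadata.namespace here either
subjects:
  - kind: ServiceAccount
    name: foo-sa
    namespace: foo        # rendered from {{ .Release.Namespace }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: foo-role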

diegosucaria commented 4 years ago

I wonder why this issue is closed... the issue is still there.

FrediWeber commented 4 years ago

We also have the same issue. I don't agree that the presence of the namespace field in the metadata is unnecessary. When you manage bigger Kubernetes clusters, you (hopefully) don't manage them manually by applying resources with kubectl apply. If you're working with GitOps for example, it is extremely important to get a final, correct resource out of the helm template command, especially when you specify the --namespace parameter.

@bacongobbler I don't know why you closed this issue, but as you can see, there are a lot of users who want Helm to correctly render resources, so please reopen it. I'm sure there is a middle way for users who really need a complete resource out of Helm (e.g. a new command-line option).

bacongobbler commented 4 years ago

I wonder why this issue is closed... the issue is still there.

Because of the reasons I stated above.

I expected helm template to give me a template that represents the resource to be created by the corresponding helm install

I don't agree that the presence of the namespace field in the metadata is unnecessary.

helm template gives you the exact template helm install receives. I'll try to explain it more clearly.

A simple example demonstrating what I meant:

><> cat job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
      restartPolicy: OnFailure
><> kubectl create namespace foo
namespace/foo created
><> kubectl create -f job.yaml --namespace foo
job.batch/hello created
><> kubectl get jobs --namespace foo
NAME    COMPLETIONS   DURATION   AGE
hello   0/1           9s         9s

Notice that job.yaml has no metadata.namespace parameter set. The template is passed to Kubernetes, and kubectl tells Kubernetes to install it in the foo namespace. Helm mimics the same behaviour here.

So yes, the namespace parameter is completely unnecessary for both kubectl and helm install to work.

The manifest does not need to have its namespace parameter set to be installed in a particular namespace, and this is how helm install works. This is also how kubectl apply works. Why should helm template accommodate this?

bacongobbler commented 4 years ago

If you're working with GitOps for example, it is extremely important to get a final, correct resource out of the helm template command, especially when you specify the --namespace parameter.

This would be a flaw of the system not accepting the same inputs as Kubernetes. I'd ask you to go back to that project and ask them to implement this functionality. helm template is displaying the correct output that helm install and kubectl apply expect. This is not a design flaw from Helm's perspective.

FrediWeber commented 4 years ago

No worries, I've changed the relevant Helm charts to accept the namespace parameter themselves. The only thing I'm still wondering about is how the GitOps system that manages the whole Kubernetes cluster, with many namespaces, should automatically detect which resources to put in which namespace. One very important aspect of the GitOps paradigm is that resources are defined explicitly and exhaustively, so you have fewer surprises and more rigid control via your VCS.

icy commented 4 years ago

This would be a flaw of the system not accepting the same inputs as Kubernetes. I'd ask you to go back to that project and ask them to implement this functionality. helm template is displaying the correct output that helm install and kubectl apply expect. This is not a design flaw from Helm's perspective.

I slightly modified your test as below and used it with kustomize. The manifest test.yaml includes your original example plus a copy of it for another namespace. In this case, I can't specify the namespace in kustomization.yaml (kustomize would render two identical manifests) or via kubectl: I can't apply/diff the template correctly because one of the documents generates a namespace conflict:

$ kustomize build | kubectl diff --namespace prod -f-
error: the namespace from the provided object "metrics" does not match the namespace "prod". You must pass '--namespace=metrics' to perform this operation.

$ cat test.yaml | kubectl apply --namespace prod -f-
error: the namespace from the provided object "metrics" does not match the namespace "prod". You must pass '--namespace=metrics' to perform this operation.

For a single manifest I think you can specify the --namespace option for kubectl, but if you have a stream of manifests with different namespaces, that's a problem.

There is a way to modify the original template to include the namespace, but I don't really get why we have helm template --namespace foo when it doesn't generate any namespace in the output. (If I need a variable to use in my template, I already know how to do that.) Yes, it's a surprising feature to me. (But thanks to that, I found another kustomize issue: https://github.com/kubernetes-sigs/kustomize/issues/2995 )

kustomization.yaml

resources:
- test.yaml

test.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
      restartPolicy: OnFailure

---
apiVersion: batch/v1
kind: Job
metadata:
  name: hello
  namespace: metrics
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
      restartPolicy: OnFailure

icy commented 4 years ago

So yes, the namespace parameter is completely unnecessary for both kubectl and helm install to work.

When I don't specify a namespace in my manifest/template file (test.yaml), I believe helm install --namespace foo will install everything into the namespace foo, as I expect.

Now if I render the template with helm template --namespace foo and use that output with kubectl apply, as seen in my previous comment, I have no way to make kubectl apply do the right thing (unless I fix the original template). See also @Diaphteiros' comment https://github.com/helm/helm/issues/3553#issuecomment-603826152.

Please correct me if I'm wrong. Many thanks.

icy commented 4 years ago

@bacongobbler Can you please have a look at my comments? The output of helm template can't be used directly by the kubectl apply command unless I modify the original template to specify the namespace explicitly. I don't know how helm install --namespace foo works in detail, but I am sure I can't use the same approach with kubectl: there are quite a lot of differences.

In your comment, https://github.com/helm/helm/issues/3553#issuecomment-649071246, the example is good, but it's only applicable to a single-document YAML file.

Many thanks.

dfang commented 3 years ago

--namespace is listed as a global flag in helm template -h, yet helm template traefik traefik/traefik --namespace traefik and helm template traefik traefik/traefik output the same YAML.

Is this REALLY a global flag?

 λ  helm template -h

Render chart templates locally and display the output.

Since helm template renders the chart locally, it's safe to overwrite the namespace. I know helm template xxx | kubectl apply -n foo -f - can install into the foo namespace, but what if a user just wants the output of helm template with the passed namespace set?

Maybe add a --set-namespace flag to the helm template subcommand?

vsfedorenko commented 3 years ago

One-liner solution/hotfix with CRD support:

  NAMESPACE=argocd \
    && printf "apiVersion: kustomize.config.k8s.io/v1beta1\nkind: Kustomization\nresources: [all.yaml]"> kustomization.yaml \
    && (helm dep build 2> /dev/null || helm dep update) \
    && helm template . --name-template argocd --namespace ${NAMESPACE} --include-crds > all.yaml \
    && (kubectl create namespace ${NAMESPACE} || true) \
    && kubectl apply -n ${NAMESPACE} -k . \
    && rm kustomization.yaml \
    && rm all.yaml

shibumi commented 3 years ago

Sad to see that Helm closed this with "wontFix".

Helmfile has added a workaround for this:

releases:
- name: "aws-load-balancer-controller"
  namespace: "kube-system"
  forceNamespace: "kube-system"

Generating YAML with helmfile then adds the metadata.namespace field to all resources.

icy commented 3 years ago

Sad to see that Helm closed this with "wontFix".

I solved the problem by not using Helm. A lot better ;)

WesDowney commented 3 years ago

Looks like it works now with helm -n mynamespace template. It populated my .Release.Namespace value appropriately in the template.

PauloGDPeixoto commented 3 years ago

@bacongobbler I'm sorry, but your replies are not satisfactory for a project of this dimension. Just because it doesn't fit your use case doesn't mean that dozens of others can't find this feature useful. In my case, we're using helm template to render the templates, which are then managed by argocd; if helm template does not allow the namespace to be passed as an argument, there is no way to tell argocd to install these manifests into a specific namespace. Now, you can ask why I am doing it this way, or you can argue that this is not the most functional way to make things work from your perspective, but the thing is: this is how I have to use it, and I will sooner drop helm than change a pipeline that consumes helm, yaml, jsonnet, kustomize, etc... This issue is more than 3 years old; I can only assume that this is not being implemented due to stubbornness.

afirth commented 3 years ago

@bacongobbler I'm sorry, but your replies are not satisfactory for a project of this dimension. [...]

Hey @PauloGDPeixoto - don't be a jerk. Matt has been tirelessly maintaining this project and ultimately the decision to implement a feature rests with the code owners. In this particular case, I think you'll find helm template -n <namespace> does what you want now, and otherwise you can use the workarounds above or apply kustomize patches easily either in your pipeline or with helm, for example using something like:

patches:
  - patch: |-
      - op: replace
        path: /metadata/namespace
        value: newnamespace
    target:
      labelSelector: mylabel=foo

https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patches/

Anyway, be kind :)

PauloGDPeixoto commented 3 years ago

I'm sorry if I was too blunt, but this tendency among developers of not being able to empathize with others is utterly annoying. FYI, I tested your suggestion and it didn't work; I have no namespace field in my rendered templates. And I do realize that there are workarounds, but does it really make sense to jump through hoops for something that should be simple? Anyway, I don't mean to be disrespectful, I would just like to see people being more mindful of the different uses that others find for the tools they create.

icy commented 3 years ago

I just want to add a note that I already made in my previous comment. The simple case looks fine, but a more complex situation may yield a different/unexpected result [1]. To be honest, it's a confusing option flag, and I still don't really get what it means.

[1] https://github.com/helm/helm/issues/3553#issuecomment-694465293

bacongobbler commented 3 years ago

At the end of the day, it is up to the maintainers to decide what use cases they wish to support and which ones they don't. That is how open source works.

I explained very clearly in an earlier comment why it does not make sense for Helm to include this functionality as it breaks multiple assumptions we've made in the core APIs.

If you disagree with the maintainers' decisions, there's a button for that:

(screenshot)

icy commented 3 years ago

@bacongobbler Honestly, I'd like an answer to this [1], which is a counter-example to [2]. I haven't looked at how the core API works, but I think there may be something to clarify.

It's still Helm 3; I used Helm 2 before. Maybe it will be better in the future. So here we discuss these things. I think that's also a way open source works.

[1] https://github.com/helm/helm/issues/3553#issuecomment-694465293 [2] https://github.com/helm/helm/issues/3553#issuecomment-649071246

bacongobbler commented 3 years ago

Your example demonstrates resources requesting to be deployed in one namespace while you specify a different namespace with kubectl apply --namespace. kubectl checks for this and fails if the two differ, via the EnforceNamespace option, which I believe is enabled when you override the --namespace parameter.

https://github.com/kubernetes/kubectl/blob/3ffa097df91ed3609aeb64d44acfb4da8e7d05b6/pkg/cmd/apply/apply.go#L361-L364

Many users might call this expected behaviour. You asked resources to be installed in one place, while they attempted to be installed in another, resulting in an error.

Perhaps they would be amenable to a pull request to disable that EnforceNamespace feature if --enforce-namespace=false was set or something like that. But I'm not confident as the current behaviour prevents bad actors from installing resources in namespaces the users did not ask for. Which is a GOOD thing.

Compared to kubectl, Helm does not have strict enforcement enabled. We tried to do that for Helm 3 but had to back out, as people were deploying apps across multiple namespaces, breaking a long-standing assumption in Helm 2 that charts should be restricted to a namespace. Enforcing it meant breaking users' charts, which we wanted to avoid with the Helm 3 update, so we removed that constraint in Helm 3.0.0-alpha.2.

https://github.com/helm/helm/commit/b5d555e4eafca85671ccbd9083cc8f811c9560b3#diff-137bba79988de442c1fde0fc29520d6d7e5a5239ceac82f5439a5bf513ebb2d4

PauloGDPeixoto commented 3 years ago

Something that's hard for me to comprehend is the preoccupation with kubectl apply. Shouldn't the main focus be on installing the charts with helm? If you're running helm template to install the manifests, shouldn't that be out of scope for this application?

bacongobbler commented 3 years ago

When we allow others to build upon prior work, we often improve upon it and take our industry (and society) in new directions.

helm template runs the chart and its templates through Helm's rendering engine, displaying the output on standard output. It says so on the tin.

><> helm help template | head -n 2

Render chart templates locally and display the output.

It just so happens that the rendered output is a bunch of Kubernetes YAML files in plain text form. Which kubectl apply accepts.

Putting it another way... Helm's core can be extended and re-consumed by other programs in the Kubernetes ecosystem through helm template.

There are other core pieces of Helm's architecture besides the install/upgrade/uninstall system. There's the chart repository system and the chart dependency subsystem, for example.

helm template provides a way to use these other parts of Helm and integrate them into their own projects.

So, to answer your question... No. We don't officially provide technical support for "How do I...?" types of questions with helm template | kubectl apply.

But we recognize that many users DO take the output of helm template to integrate parts of Helm into their projects in new and exciting ways. And we're happy to encourage that to improve upon what's been built by others in the community.

icy commented 3 years ago

@bacongobbler Thanks a lot for your time on the excellent comment [1]. It's very clear how the problem/confusion occurs now, and that's helpful to live with some work-around in reality.

@PauloGDPeixoto helm template | foo is very helpful and fast (at least from my work experience). Here foo can be anything, and kubectl apply is one of them.

[1] https://github.com/helm/helm/issues/3553#issuecomment-888566296

johnnyrun commented 3 years ago

Hi. Argocd user here; it has helm support. Argocd uses "helm template" in order to render the YAML it deploys and tracks.

As @icy said, kubectl is only one of the programs having issues with namespaces. https://github.com/argoproj/argo-cd/blob/9476ab5e18be34a8cfc81150c94d08eefe3cee25/util/helm/helm.go#L64

mhubig commented 2 years ago

Hi there, new helm user here. I was very surprised by how helm template handles the --namespace parameter. See kubernetes-sigs/external-dns#2440 for details. I think there should be at least a dedicated section within the helm documentation on the best practice of how to use namespaces with helm.

But on the other hand, why not make it explicit:

  1. The --namespace option could just set the .Release.Namespace variable. Nothing more, nothing fancy. If one needs namespace support within a chart, one can just make use of this handy variable.
  2. Remove the --create-namespace flag. If you need to create a namespace within your chart, just add a namespace object utilizing the .Release.Namespace variable (see the sketch after this comment).
  3. Write a best-practice guide on how to use the .Release.Namespace variable within your chart to support setting the namespace with --namespace.

I somehow think it's simpler to implement this approach than to write good documentation for the current behavior ...
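
As a rough sketch of point 2 above (the file path is hypothetical), a chart could ship its own namespace object driven by the same variable, which helm template --namespace should then render:

# templates/namespace.yaml (hypothetical)
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Release.Namespace }}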

deefdragon commented 2 years ago

I WAS setting up a process to store rendered templates under version control so that I can validate any changes to the values, as well as any changes to the chart, in one location. The fact that namespaces are not saved in the templates took me several hours to debug, and that was a waste of my time.

This was closed for a ridiculous reason. The YAML files output from helm template should represent the EXACT state that helm install would create. Not "the state that would be applied if kubectl uses the same arguments". Otherwise what's the point of having those options on template to begin with? Or the point of templating at all? They are literally irrelevant for my use case now, and I have to redesign my process.

Either add them to the template, or remove them as arguments (crash if set). I'm for the former because that option makes actual sense given what templates are supposed to do, but at least pick one.
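
For context, a hypothetical sketch of the kind of workflow described at the start of this comment (paths and names are invented); without metadata.namespace in the rendered output, the namespace information is silently dropped at the render step:

helm template my-release ./chart --namespace team-a -f values.yaml > rendered/team-a.yaml
git add rendered/team-a.yaml
git commit -m "render chart for team-a"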

MichaelHindley commented 2 years ago

We are running into major headaches with helm template not working as expected with diverse namespace parameters. In our case, we follow a GitOps pattern and render out helm templates in order to perform reconciliation. The issue is that the metadata.namespace field does not respect the namespace parameter in the way we, and I think many others, would find intuitive, mostly because it differs from helm install even though it does so based on locally provided input, not on cluster data.

mlkmhd commented 2 years ago

I did it with yq and kubectl-slice tools:

#!/bin/bash

helm template -n $KUBERNETES_NAMESPACE myproject ./helm-chart > all.yaml

kubectl-slice --input-file=all.yaml --output-dir=manifests

for filename in manifests/*; do
    yq -i --yaml-output ".metadata |= ({namespace: \"$KUBERNETES_NAMESPACE\"} + .)" $filename
done

bernardgut commented 5 months ago

Why is this closed? The documentation makes it seem like helm template and helm install are functionally equivalent while clearly they are not.

This is clearly not a "feature": countless downstream projects and users are wasting time because of this, as you can see from the trail of links pointing to this issue above.

kuisathaverat commented 3 months ago

It also affects the install command in debug mode. The generated manifest does not honor the input parameters: the following command will generate different output than the resources actually created, so you are not debugging the correct manifest.

helm install --dry-run --debug --generate-name --namespace foo .

Workaround to generate the correct manifest using helm template:

NAMESPACE=foo
HELM_CHART=.

helm template --generate-name --namespace "${NAMESPACE}" "${HELM_CHART}"|kubectl apply -n "${NAMESPACE}" --dry-run=client --output=yaml -f - | python -c "import sys, yaml; print('---\n'.join(yaml.safe_dump(item) for item in yaml.safe_load(sys.stdin)['items']))" 

divramod commented 3 months ago

Oddly, it works when you run it on a remote chart like this:

helm template -f values.yaml --namespace test-helm --version 19.6.0 bitnami/redis

but not if you run it on a local chart.

Vivida1 commented 2 months ago

Pretty sure this used to work but now it seems broken.

f2calv commented 2 months ago

Oddly, it works when you run it on a remote chart like this:

helm template -f values.yaml --namespace test-helm --version 19.6.0 bitnami/redis

but not if you run it on a local chart.

TLDR: If the namespace doesn't get set, then your chart is badly constructed, so add namespace: {{ .Release.Namespace }} to the metadata section of all your charts and it'll be set correctly, OR create a more elaborate helper as detailed below...


I was under the impression that "helm template --namespace abc123" would auto-magically add "namespace: abc123" to my manifests; however, this isn't the case, and you need to construct the chart correctly. The Redis example you highlighted has a helper template, located in the bitnami-common library chart, that adds the namespace correctly:

{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts.
*/}}
{{- define "common.names.namespace" -}}
{{- default .Release.Namespace .Values.namespaceOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

The above helper ensures you refer to .Release.Namespace or other default values to set the namespace as required. Usage in the Redis chart is as follows:

{{- if (include "redis.createConfigmap" .) }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ printf "%s-configuration" (include "common.names.fullname" .) }}
  namespace: {{ include "common.names.namespace" . | quote }}

Note: above tested with 3.14.3
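
Assuming the helper above, either of the following should end up setting metadata.namespace in the rendered output (the release name is a placeholder; namespaceOverride is the value read by the bitnami-common helper shown above):

helm template my-redis bitnami/redis --namespace test-helm
helm template my-redis bitnami/redis --set namespaceOverride=test-helm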