kubernetes-sigs / kustomize

Customization of kubernetes YAML configurations
Apache License 2.0

'helmCharts' generator doesn't re-pull a chart when updating the version attribute #3848

Closed ChristianCiach closed 1 month ago

ChristianCiach commented 3 years ago

Version:

{Version:kustomize/v4.1.2 GitCommit:a5914abad89e0b18129eaf1acc784f9fe7d21439 BuildDate:2021-04-15T20:38:06Z GoOs:linux GoArch:amd64}

Please have a look at this kustomization:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
  - name: traefik
    repo: https://helm.traefik.io/traefik
    version: 9.18.2
    releaseName: traefik

When running kustomize build --enable-helm on this directory, the chart archive gets pulled to ./charts/traefik-9.18.2.tgz and inflated into ./charts/traefik/. Nothing surprising so far.

To follow best practices (and because our CI system has no access to the internet), we commit the generated charts directory into version control.

Unfortunately, when someone later updates the version from 9.18.2 to 9.18.3 and builds the project locally (with internet access), kustomize won't pull the new chart version and silently uses the old chart, because kustomize only checks for the existence of the directory ./charts/traefik.
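
The decision described above can be sketched in a few lines of shell. This is an illustrative reconstruction of the reported behaviour, not kustomize's actual source; the paths and version come from this report:

```shell
# Illustrative reconstruction of the skip logic (NOT kustomize's actual
# source). The inflated directory name carries no version, so a version
# bump in kustomization.yaml goes unnoticed.
maybe_pull() {
  chart_dir="$1"  # e.g. charts/traefik -- no version in the name
  version="$2"    # e.g. 9.18.3, freshly bumped in kustomization.yaml
  if [ -d "$chart_dir" ]; then
    echo "reusing $chart_dir (possibly stale)"
  else
    echo "pulling version $version into $chart_dir"
  fi
}

maybe_pull charts/traefik 9.18.3
```

With a leftover charts/traefik/ from 9.18.2 in the working tree, the "reusing" branch is taken and the new version is never fetched.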

I think kustomize should append the version to the directory name to circumvent this issue.

Well, actually, to be honest: I think it would be better if kustomize only checked for the existence of the tar file instead of the inflated directory. I don't really want to commit the inflated chart directory ./charts/traefik/ to version control. It would be a lot cleaner to commit just the tar file ./charts/traefik-9.18.2.tgz and let kustomize (or helm) inflate it on every build.
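
That tar-file-first workflow could look roughly like this. The function name and paths are illustrative, and re-inflating on every build is the whole point: a stale inflated directory can then never shadow a version bump in the committed archive:

```shell
# Sketch of the proposed workflow (illustrative paths): always
# re-inflate the committed .tgz so no stale inflated copy survives.
inflate_chart() {
  archive="$1"  # e.g. charts/traefik-9.18.2.tgz (committed to git)
  dest="$2"     # e.g. charts/
  # derive the unversioned directory name, e.g. "traefik"
  name=$(basename "$archive" | sed 's/-[0-9].*$//')
  rm -rf "${dest:?}/$name"        # drop any stale inflated copy
  tar -xzf "$archive" -C "$dest"  # archive inflates to <dest>/<name>/
}
# afterwards: kustomize build --enable-helm .
```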

I think it would be even better if we could ensure that pulls never happen by referencing a local tar file instead. Helm itself can template local tar files without issues:

$ helm template ./charts/traefik-9.18.3.tgz
---
# Source: traefik/templates/rbac/serviceaccount.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: traefik
  labels:
    app.kubernetes.io/name: traefik
    helm.sh/chart: traefik-9.18.3
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: RELEASE-NAME
  annotations:
---
.......

Please, let us use local tar files as helmCharts!
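
If such a feature existed, the kustomization might look something like the following. Note that this syntax is purely hypothetical: the chartArchive field is made up for illustration and does not exist in kustomize today, which is exactly what this issue asks for:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
  - name: traefik
    # hypothetical field -- NOT supported by kustomize; this is the
    # feature being requested in this issue
    chartArchive: ./charts/traefik-9.18.3.tgz
    releaseName: traefik
```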

k8s-triage-robot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

mikebz commented 3 years ago

@natasha41575 this might be addressed with your function based approach.

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

pvlg commented 3 years ago

It is also impossible to specify multiple versions of the same chart; only the first version will be used.

helmCharts:
  - name: traefik
    repo: https://helm.traefik.io/traefik
    version: 9.18.2
    releaseName: traefik
  - name: traefik
    repo: https://helm.traefik.io/traefik
    version: 10.0.0
    releaseName: traefik-other
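
The collision can be sketched the same way as the original report (illustrative shell, not kustomize's source): both entries map onto the same unversioned directory, so whichever is inflated first wins:

```shell
# Illustrative sketch (NOT kustomize's source): with the version absent
# from the directory name, two entries for the same chart collide.
sim_pull() {
  dir="charts/traefik"  # same path for 9.18.2 and 10.0.0
  if [ -d "$dir" ]; then
    echo "skip $1 ($dir exists)"
  else
    mkdir -p "$dir"
    echo "pull $1 into $dir"
  fi
}

sim_pull 9.18.2   # pull 9.18.2 into charts/traefik
sim_pull 10.0.0   # skip 10.0.0 (charts/traefik exists)
```
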

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

Blackclaws commented 2 years ago

I think this in general is because the helm feature is a bit "tacked on" and doesn't really do any sort of package management per se. It's just a convenience.

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot commented 2 years ago

@k8s-triage-robot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kustomize/issues/3848#issuecomment-1146890240):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues and PRs according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue or PR with `/reopen`
> - Mark this issue or PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

vvatlin commented 2 years ago

What's the point of automatically closing unresolved issues?

vvatlin commented 2 years ago

/reopen

k8s-ci-robot commented 2 years ago

@vvatlin: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to [this](https://github.com/kubernetes-sigs/kustomize/issues/3848#issuecomment-1146891933):

> /reopen

QuinnBast commented 2 months ago

> Well, actually, to be honest: I think it would be better if kustomize would only check for the existence of the tar-file instead of the inflated directory. I don't really want to commit the inflated chart directory ./charts/traefik/ to version control. I think it would be a lot cleaner to just commit the tar file ./charts/traefik-9.18.2.tgz to version control and let kustomize (or helm) inflate the file on every build.
>
> I think it would be even better if we could ensure that pulls never happen by referencing a local tar file instead. Helm itself can template local tar files without issues:

Yes please!!! So sad to see that this issue has been closed for 2 years... We just ran into this: we are trying to commit charts because we won't have internet access at our deployment site. We would love to be able to tell kustomize that we use upstream charts, but that it should prefer a local copy if one exists. Right now this means committing the entire /charts directory, which is a disaster and bloats our repository. If we could commit just the tgz it would be so much cleaner.

I tried keeping just the tgz in my repo, but running kustomize build while only the tgz exists returns this error:

Error: Error: failed to untar: a file or directory with the name /home/almalinux/Documents/repository/infrastructure/kubernetes/deployments/kafka/testbed/charts/strimzi-kafka-operator-0.41.0/strimzi-kafka-operator-helm-3-chart-0.41.0.tgz already exists : unable to run: 'helm pull --untar --untardir /home/almalinux/Documents/repo/infrastructure/kubernetes/deployments/kafka/testbed/charts/strimzi-kafka-operator-0.41.0 --repo https://strimzi.io/charts strimzi-kafka-operator --version 0.41.0' with env=[HELM_CONFIG_HOME=/tmp/kustomize-helm-1068517713/helm HELM_CACHE_HOME=/tmp/kustomize-helm-1068517713/helm/.cache HELM_DATA_HOME=/tmp/kustomize-helm-1068517713/helm/.data] (is 'helm' installed?): exit status 1
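
One possible workaround, assuming the behaviour described earlier in this thread holds (kustomize skips the pull when the inflated directory already exists): pre-inflate the committed archives yourself before the build. This is an untested sketch with illustrative paths, not a confirmed fix:

```shell
# Untested workaround sketch (illustrative paths): pre-inflate every
# committed chart archive so the inflated directory already exists and
# `helm pull` is never invoked by the build.
inflate_all() {
  for tgz in charts/*.tgz; do
    [ -e "$tgz" ] || continue  # glob matched nothing; no archives present
    tar -xzf "$tgz" -C charts/
  done
}

inflate_all
# kustomize build --enable-helm .
```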

@natasha41575 maybe?

a7i commented 2 months ago

/reopen

k8s-ci-robot commented 2 months ago

@a7i: Reopened this issue.

In response to [this](https://github.com/kubernetes-sigs/kustomize/issues/3848#issuecomment-2311251296):

> /reopen

k8s-ci-robot commented 2 months ago

This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 month ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/kustomize/issues/3848#issuecomment-2375442505):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/