Closed — gberche-orange's issue was closed 2 years ago.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.
Bug Report
What happened:
Each time a `helm upgrade` command is run, the migration job from service catalog 0.2.0 to 0.3.0 runs again, creating controller downtime, as the following trace of the migration job output shows:

What you expected to happen:
The migration job should only trigger if the current installation is still running a 0.2.x version.
The Helm built-in values do not seem to provide the version of the currently installed release when `Release.IsUpgrade` is true; see https://helm.sh/docs/topics/charts/
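For reference, the built-in `Release` object documented there exposes only the following fields, none of which carry the previously installed chart version:

```yaml
# Fields of Helm's built-in .Release object (per the Helm docs linked above):
name: {{ .Release.Name }}            # the release name
namespace: {{ .Release.Namespace }}  # the namespace released into
isUpgrade: {{ .Release.IsUpgrade }}  # true during an upgrade or rollback
isInstall: {{ .Release.IsInstall }}  # true during the initial install
revision: {{ .Release.Revision }}    # release revision number
service: {{ .Release.Service }}      # the rendering service, always "Helm"
```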
The restore job should therefore check whether it actually needs to run before doing any work.
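A minimal sketch of such a guard, assuming a hypothetical `migration.fromVersion` chart value that the operator sets to the previously installed chart version (Helm's built-ins cannot supply it):

```yaml
# migration-job.yaml (sketch): render the job only on upgrades from a 0.2.x
# release. The migration.fromVersion value is hypothetical; the operator
# would set it explicitly (e.g. --set migration.fromVersion=0.2.0), and
# values.yaml would declare a default:
#   migration:
#     fromVersion: ""
{{- if and .Release.IsUpgrade (hasPrefix "0.2." (.Values.migration.fromVersion | default "")) }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migration-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migration
          image: migration:latest   # placeholder image
          args: ["restore"]
{{- end }}
```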
Alternatively, a migration opt-in or opt-out chart value would let the Helm templates disable the pre- and post-migration jobs in https://github.com/kubernetes-sigs/service-catalog/blob/master/charts/catalog/templates/pre-migration-job.yaml and https://github.com/kubernetes-sigs/service-catalog/blob/master/charts/catalog/templates/migration-job.yaml
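A sketch of that opt-out variant, assuming a hypothetical `migration.enabled` value (defaulting to true to preserve the current behaviour) wrapping both job templates:

```yaml
# pre-migration-job.yaml / migration-job.yaml (sketch): wrap the existing job
# definition so it is only rendered when migration is enabled.
# values.yaml would declare the default:
#   migration:
#     enabled: true
{{- if .Values.migration.enabled }}
apiVersion: batch/v1
kind: Job
# ... existing job spec unchanged ...
{{- end }}
```

An operator upgrading between two 0.3.x releases could then skip the jobs with, e.g., `helm upgrade catalog svc-cat/catalog --set migration.enabled=false`.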
How to reproduce it (as minimally and precisely as possible): run `helm upgrade` on an existing service catalog release.
Anything else we need to know?:
This relates to #2853
Environment:
- Kubernetes version (use `kubectl version`):