kubernetes-retired / kubefed

Kubernetes Cluster Federation
Apache License 2.0

Helm chart 0.x.x to 0.9.x upgrade fails #1489

Closed · tehlers320 closed this 2 years ago

tehlers320 commented 2 years ago

What happened: The new settings in the chart's `KubeFedConfig` resource appear to be applied before the updated CRD is installed:

Error: error validating "": error validating data: ValidationError(KubeFedConfig.spec.controllerDuration): unknown field "cacheSyncTimeout" in io.kubefed.core.v1beta1.KubeFedConfig.spec.controllerDuration
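A quick way to confirm the mismatch is to check whether the CRD currently installed in the cluster already knows the new field. A minimal sketch, assuming the CRD name `kubefedconfigs.core.kubefed.io` implied by the group and kind in the error message:

```sh
# If this prints nothing, the cluster still has the pre-0.9.x CRD schema.
kubectl get crd kubefedconfigs.core.kubefed.io -o yaml | grep cacheSyncTimeout
```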

What you expected to happen: upgrades work.

How to reproduce it (as minimally and precisely as possible): upgrade the Helm release from chart 0.8.1 to 0.9.0, e.g. as sketched below.
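Something like the following should reproduce it. This is a sketch, assuming the chart repo URL from the project's install docs and a release named `kubefed`; versions and flags may need adjusting:

```sh
# Chart repo URL as given in the kubefed install docs
helm repo add kubefed-charts https://raw.githubusercontent.com/kubernetes-sigs/kubefed/master/charts
helm repo update

# Install the 0.8.1 chart, then upgrade to 0.9.0. The upgrade fails because
# Helm leaves the already-installed CRDs untouched, so the new KubeFedConfig
# fields are rejected by the old schema.
helm install kubefed kubefed-charts/kubefed --version 0.8.1 \
  --namespace kube-federation-system --create-namespace
helm upgrade kubefed kubefed-charts/kubefed --version 0.9.0 \
  --namespace kube-federation-system
```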

Anything else we need to know?:

I think this may fix it, not sure:

annotations:
  "helm.sh/hook": pre-install

on the CRD (see the sketch at the end of this comment).

Environment:

/kind bug
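For reference, a fuller sketch of that hook idea, assuming the chart templates the CRD rather than shipping it in Helm 3's `crds/` directory (which Helm installs once and never upgrades). Note that a `pre-install` hook alone never fires on `helm upgrade`, so `pre-upgrade` would also be needed; the CRD name and fields below are illustrative:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: kubefedconfigs.core.kubefed.io
  annotations:
    # Apply the CRD before the rest of the release, on install *and* upgrade;
    # "pre-install" alone does not run during "helm upgrade".
    "helm.sh/hook": pre-install,pre-upgrade
spec:
  # ... CRD schema as generated by the chart ...
```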

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

jimmidyson commented 2 years ago

Could you please provide the simplest repro steps?

ra-grover commented 2 years ago

Hey @jimmidyson, when upgrading the Helm chart from 0.8.1 to 0.9.2 you will face this error:

helm upgrade -n kube-federation-system --set docker_tag=v0.9.2 kubefed .
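A common workaround for this class of Helm/CRD problem is to apply the new CRDs out of band and then run the upgrade. A sketch, untested here; the manifest path is an assumption about the kubefed source layout, so verify it against the tag you are upgrading to:

```sh
# Apply the 0.9.2 CRDs directly, since Helm will not upgrade CRDs itself.
# (Path is an assumption -- check the repo layout for your chart version.)
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/kubefed/v0.9.2/charts/kubefed/charts/controllermanager/crds/crds.yaml

# Then re-run the chart upgrade as before.
helm upgrade -n kube-federation-system --set docker_tag=v0.9.2 kubefed .
```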

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot commented 2 years ago

@k8s-triage-robot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kubefed/issues/1489#issuecomment-1215035391):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues and PRs according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue or PR with `/reopen`
> - Mark this issue or PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

mimmus commented 1 year ago

Same issue. @tehlers320, were you able to solve this?