VictoriaMetrics / helm-charts

Helm charts for VictoriaMetrics, VictoriaLogs and ecosystem
https://victoriametrics.github.io/helm-charts/
Apache License 2.0

[victoria-metrics-k8s-stack] Missing `VMScrapeConfig` CRD error after upgrading to version 0.24.5 #1224

Closed · mhkarimi1383 closed this issue 1 week ago

mhkarimi1383 commented 4 weeks ago
{"level":"error","ts":"2024-08-15T07:43:43Z","logger":"controller-runtime.source.EventHandler","msg":"if kind is a CRD, it should be installed before calling Start","kind":"VMScrapeConfig.operator.victoriametrics.com","error":"no matches for kind \"VMScrapeConfig\" in version \"operator.victoriametrics.com/v1beta1\"","stacktrace":"github.com/go-logr/logr.Logger.Error\n\t/go/pkg/mod/github.com/go-logr/logr@v1.4.2/logr.go:301\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/source/kind.go:71\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func2\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/loop.go:87\nk8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/loop.go:88\nk8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel\n\t/go/pkg/mod/k8s.io/apimachinery@v0.30.2/pkg/util/wait/poll.go:33\nsigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.18.4/pkg/internal/source/kind.go:64"}

I'm getting this error, and the operator also crashes.

The only operator- and CRD-related settings in my values are below (a minimal upgrade sketch follows the snippet):

victoria-metrics-operator:
  enabled: true
  operator:
    disable_prometheus_converter: false

crds:
  enabled: true

prometheus-operator-crds:
  enabled: true

List of CRDs installed by this chart (a check for the missing CRD is sketched after the list):

k get crd | grep -E 'victoriametrics|monitoring'
alertmanagerconfigs.monitoring.coreos.com              2024-08-15T07:41:22Z
alertmanagers.monitoring.coreos.com                    2024-08-15T07:41:23Z
podmonitors.monitoring.coreos.com                      2024-08-15T07:41:24Z
probes.monitoring.coreos.com                           2024-08-15T07:41:24Z
prometheusagents.monitoring.coreos.com                 2024-08-15T07:41:26Z
prometheuses.monitoring.coreos.com                     2024-08-15T07:41:28Z
prometheusrules.monitoring.coreos.com                  2024-08-15T07:41:28Z
scrapeconfigs.monitoring.coreos.com                    2024-08-15T07:41:29Z
servicemonitors.monitoring.coreos.com                  2024-08-15T07:41:29Z
thanosrulers.monitoring.coreos.com                     2024-08-15T07:41:30Z
vmagents.operator.victoriametrics.com                  2024-03-12T07:17:35Z
vmalertmanagerconfigs.operator.victoriametrics.com     2024-03-12T07:17:35Z
vmalertmanagers.operator.victoriametrics.com           2024-03-12T07:17:35Z
vmalerts.operator.victoriametrics.com                  2024-03-12T07:17:35Z
vmauths.operator.victoriametrics.com                   2024-03-12T07:17:35Z
vmclusters.operator.victoriametrics.com                2024-03-12T07:17:35Z
vmnodescrapes.operator.victoriametrics.com             2024-03-12T07:17:35Z
vmpodscrapes.operator.victoriametrics.com              2024-03-12T07:17:35Z
vmprobes.operator.victoriametrics.com                  2024-03-12T07:17:35Z
vmrules.operator.victoriametrics.com                   2024-03-12T07:17:35Z
vmservicescrapes.operator.victoriametrics.com          2024-03-12T07:17:35Z
vmsingles.operator.victoriametrics.com                 2024-03-12T07:17:35Z
vmstaticscrapes.operator.victoriametrics.com           2024-03-12T07:17:35Z
vmusers.operator.victoriametrics.com                   2024-03-12T07:17:35Z
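
Note that vmscrapeconfigs.operator.victoriametrics.com is absent from this list, and the operator.victoriametrics.com CRDs still carry their original 2024-03-12 creation timestamps. A quick sketch of how to confirm the missing CRD directly (plain kubectl, nothing chart-specific assumed):

# While the CRD is missing, this should fail with something like:
# Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "vmscrapeconfigs.operator.victoriametrics.com" not found
kubectl get crd vmscrapeconfigs.operator.victoriametrics.com
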
f41gh7 commented 4 weeks ago

Looks like a manual CRD update is required, since Helm cannot perform such an update.

See upgrade guide https://github.com/VictoriaMetrics/helm-charts/tree/master/charts/victoria-metrics-k8s-stack#upgrade-guide

mhkarimi1383 commented 4 weeks ago

@f41gh7 Thanks, running

helm show crds vm/victoria-metrics-k8s-stack --version 0.24.5 | kubectl apply -f - --server-side --force-conflicts

fixes the problem. The `--force-conflicts` flag was needed.

Here is why:

customresourcedefinition.apiextensions.k8s.io/vmscrapeconfigs.operator.victoriametrics.com serverside-applied
Apply failed with 3 conflicts: conflicts with "helmwave" using apiextensions.k8s.io/v1:
- .metadata.annotations.controller-gen.kubebuilder.io/version
- .spec.versions
- .spec.conversion.strategy
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
[... the same conflict message, with 2 or 3 conflicting fields managed by "helmwave", repeats for each of the remaining CRDs ...]
AndrewChubatiuk commented 1 week ago

It's expected that you get a conflict there, as the CRD was initially installed by helmwave and then upgraded by kubectl.
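
A sketch of how to inspect which field managers own the conflicting CRD fields (standard kubectl only; the manager names shown depend on how the CRDs were installed and updated):

# managedFields are hidden by default; --show-managed-fields reveals them
kubectl get crd vmscrapeconfigs.operator.victoriametrics.com \
  -o yaml --show-managed-fields | grep 'manager:'

Before the forced apply this would list "helmwave" among the managers; after `kubectl apply --server-side --force-conflicts`, kubectl becomes the manager of those fields.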

mhkarimi1383 commented 1 week ago

@AndrewChubatiuk

I think this should be mentioned in the docs, since most of the time CRDs are installed via Helm in the first place.
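
For reference, a possible docs snippet along the lines of what worked in this thread: Helm does not upgrade CRDs it installed from a chart's crds/ directory, so they have to be updated out of band before upgrading the release (the release name and namespace below are hypothetical):

# 1. Update the CRDs manually
helm show crds vm/victoria-metrics-k8s-stack --version 0.24.5 | \
  kubectl apply -f - --server-side --force-conflicts

# 2. Then upgrade the release as usual
helm upgrade vmks vm/victoria-metrics-k8s-stack --version 0.24.5 \
  --namespace monitoring -f values.yaml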