argoproj / argo-cd

Declarative Continuous Deployment for Kubernetes
https://argo-cd.readthedocs.io
Apache License 2.0
17.72k stars · 5.4k forks

unsupported HPA GVK: autoscaling/v2 #9145

Open marcio-pessoa opened 2 years ago

marcio-pessoa commented 2 years ago

When I try to create an app using HPA apiVersion: autoscaling/v2, the following error is returned:

ComparisonError

unsupported HPA GVK: autoscaling/v2, Kind=HorizontalPodAutoscaler

SyncError

Failed sync attempt to fdc308aba08fbf3fc08b6d2870fc9acb70d9f09b: ComparisonError: unsupported HPA GVK: autoscaling/v2, Kind=HorizontalPodAutoscaler (retried 5 times).

To Reproduce

I used a typical HPA definition file:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test
  minReplicas: 1
  maxReplicas: 2
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50

If I use apiVersion: autoscaling/v2beta2, Argo CD works perfectly. But that apiVersion is deprecated since Kubernetes v1.23 and unavailable in v1.26+.
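To see which autoscaling API versions a given cluster actually serves, something like this should work (output varies by cluster version):

```shell
# List the autoscaling API versions the cluster serves.
# On Kubernetes v1.23 this typically includes v1, v2, v2beta1, and v2beta2;
# on v1.26+ only v1 and v2 remain.
kubectl api-versions | grep '^autoscaling/'
```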

IMPORTANT: When I apply the above definition using kubectl, it works fine (I'm using Kubernetes v1.23.5).

Expected behavior

I would like to apply HPA definition using apiVersion: autoscaling/v2.

Version

$ kubectl exec -it -n argocd deployment/argocd-server -- argocd version
argocd: v2.3.3+07ac038
  BuildDate: 2022-03-30T00:06:18Z
  GitCommit: 07ac038a8f97a93b401e824550f0505400a8c84e
  GitTreeState: clean
  GoVersion: go1.17.6
  Compiler: gc
  Platform: linux/amd64
FATA[0000] Argo CD server address unspecified           
command terminated with exit code 1
oswald0071 commented 2 years ago

Is there any progress or workaround?

marcio-pessoa commented 2 years ago

The current workaround is to apply the definition files using kubectl, but it's not the Argonic way. :-(

pjaak commented 2 years ago

Currently running into the same issue

sanglt commented 2 years ago

We're running into the same issue

ledroide commented 2 years ago

Here is my workaround, using a kustomization.yaml :

patches:
  - path: hpa.argocd-fix.patch.yaml
    target:
      group: autoscaling
      version: v2
      kind: HorizontalPodAutoscaler

hpa.argocd-fix.patch.yaml :

- op: replace
  path: /apiVersion
  value: autoscaling/v2beta2

The v2beta2 is very similar to the final v2, so it should work without any other patch - but if it doesn't, just add your additional patches to the same file. As soon as the bug in Argo CD is fixed, the patch is easy to remove. My two cents.

sanglt commented 2 years ago

> Here is my workaround, using a kustomization.yaml […] The v2beta2 is very similar to final v2, so it should work without an other patch […]

No, v2beta2 is not similar to v2 - which is the strange part. ContainerResource is not supported in v2beta2. For our use case we need ContainerResource, because a few different types of sidecar containers are injected into the deployment.
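For reference, a ContainerResource metric in autoscaling/v2 scales on a single container's usage instead of the whole pod, which is what makes it useful with injected sidecars. A hedged sketch (the container name `app` is a placeholder):

```yaml
# Hypothetical fragment of an autoscaling/v2 HPA spec.
metrics:
  - type: ContainerResource
    containerResource:
      name: cpu
      container: app          # scale on this container only, ignoring sidecars
      target:
        type: Utilization
        averageUtilization: 50
```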

EppO commented 2 years ago

In 1.23, using or patching autoscaling/v2beta2 doesn't work either, because when Argo CD gets the resource back, Kubernetes serves it at the highest API version available for autoscaling, which is autoscaling/v2.

Here are the manifests using argocd app manifests

---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app: myapp
    app.kubernetes.io/instance: myapp-dev
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: myapp
    app.kubernetes.io/version: 1.1.3
    argocd.argoproj.io/instance: myapp-dev
    helm.sh/chart: myapp-1.1.3
    release: myapp-dev
  name: myapp
  namespace: myapp-dev
spec:
...

Here is what you get if you don't specify any API version (kubectl get hpa -o yaml)

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"autoscaling/v2beta2","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"labels":{"app":"myapp","app.kubernetes.io/instance":"myapp-dev","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"myapp","app.kubernetes.io/version":"1.1.3","argocd.argoproj.io/instance":"myapp-dev","helm.sh/chart":"myapp-dev-1.1.3","release":"myapp-dev"},"name":"myapp","namespace":"myapp-dev"},"spec":{"maxReplicas":2,"metrics":[{"resource":{"name":"cpu","target":{"averageUtilization":80,"type":"Utilization"}},"type":"Resource"}],"minReplicas":1,"scaleTargetRef":{"apiVersion":"apps/v1","kind":"Deployment","name":"myapp"}}}
  creationTimestamp: "2022-01-27T14:14:36Z"
  labels:
    app: myapp
    app.kubernetes.io/instance: myapp-dev
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: myapp
    app.kubernetes.io/version: 1.1.3
    argocd.argoproj.io/instance: myapp-dev
    helm.sh/chart: myapp-dev-1.1.3
    release: myapp-dev
  name: myapp
  namespace: myapp-dev
  resourceVersion: "103570142"
  uid: 15a38d49-1143-4e6f-92ad-67857e71b022
spec:
...

and Argo CD doesn't like that:

CONDITION        MESSAGE                                                                                                                                                                 LAST TRANSITION
ComparisonError  unsupported HPA GVK: autoscaling/v2, Kind=HorizontalPodAutoscaler                                                                                                       2022-05-08 08:03:36 -0700 PDT
SyncError        Failed sync attempt to ed1c3dfad4fd44553428029c1fbcfe3ced975b19: ComparisonError: unsupported HPA GVK: autoscaling/v2, Kind=HorizontalPodAutoscaler (retried 5 times).  2022-05-09 09:50:49 -0700 PDT
EppO commented 2 years ago

I opened a MR in gitops-engine repository: https://github.com/argoproj/gitops-engine/pull/411

joskuijpers commented 2 years ago

Great that a patch has been created and merged! Thank you for your work.

What is the timeline on the release of 2.4 to contain this change?

EppO commented 2 years ago

gitops-engine v0.7.0 was released with the fix. Argo CD v2.4.0 is using gitops-engine v0.7.0: https://github.com/argoproj/argo-cd/blob/v2.4.0/go.mod#L12

So hopefully, this should be fixed with the latest version of Argo CD.

EppO commented 2 years ago

I confirm autoscaling/v2 resources are synced correctly with Argo CD v2.4.0. This ticket can be closed

marcio-pessoa commented 2 years ago

Yes! The issue was fixed. Congrats @EppO !

srclark213 commented 2 years ago

Hey, I'm still seeing this issue on ArgoCD version 2.4.14 and kubernetes version 1.24. We have the latest argo charts (as of writing 5.6.0) deployed to our cluster and with

apiVersionOverrides:
  autoscaling: autoscaling/v2beta2

the HPAs are happy, but if I clear this out or set it to autoscaling/v2, I receive the unsupported HPA GVK: autoscaling/v2, Kind=HorizontalPodAutoscaler error on the 2 HPAs in ArgoCD.

output from argocd version:

argocd: v2.4.14+029be59
  BuildDate: 2022-10-05T17:15:37Z
  GitCommit: 029be590bfd5003d65ddabb4d4cb8a31bff29c18
  GitTreeState: clean
  GoVersion: go1.18.7
  Compiler: gc
  Platform: linux/amd64

output from kubectl version:

clientVersion:
  buildDate: "2022-09-21T13:19:24Z"
  compiler: gc
  gitCommit: b39bf148cd654599a52e867485c02c4f9d28b312
  gitTreeState: clean
  gitVersion: v1.24.6
  goVersion: go1.18.6
  major: "1"
  minor: "24"
  platform: windows/amd64
kustomizeVersion: v4.5.4
serverVersion:
  buildDate: "2022-09-21T21:46:51Z"
  compiler: gc
  gitCommit: b39bf148cd654599a52e867485c02c4f9d28b312
  gitTreeState: clean
  gitVersion: v1.24.6
  goVersion: go1.18.6
  major: "1"
  minor: "24"
  platform: linux/amd64

Are there any pieces to this puzzle that I'm missing here?

marcio-pessoa commented 2 years ago

Wow! Could you please kindly share the definition files and Argo CD error message?

srclark213 commented 2 years ago

Sure. So we're using the argo-helm charts on 5.6.0 here (https://github.com/argoproj/argo-helm/tree/argo-cd-5.6.0), which result in this definition file for the repo-server hpa:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  labels:
    app.kubernetes.io/component: repo-server
    app.kubernetes.io/instance: argo-cd
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: argocd-repo-server-hpa
    app.kubernetes.io/part-of: argocd
    argocd.argoproj.io/instance: argo-cd
    helm.sh/chart: argo-cd-5.6.0
  name: argo-cd-argocd-repo-server-hpa
  namespace: argocd
spec:
  maxReplicas: 5
  metrics:
    - resource:
        name: memory
        target:
          averageUtilization: 50
          type: Utilization
      type: Resource
    - resource:
        name: cpu
        target:
          averageUtilization: 50
          type: Utilization
      type: Resource
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: argo-cd-argocd-repo-server

And the error we get with this is

ComparisonError: unsupported HPA GVK: autoscaling/v2, Kind=HorizontalPodAutoscaler
beyondbill commented 2 years ago

Should we consider reopening the issue, given that 2.4.0 does not seem to fix it based on several comments above?

jsirianni commented 1 year ago

I am also seeing this when upgrading hpa from v2beta to v2.

argocd version
argocd: v2.4.14+029be59
  BuildDate: 2022-10-05T17:37:30Z
  GitCommit: 029be590bfd5003d65ddabb4d4cb8a31bff29c18
  GitTreeState: clean
  GoVersion: go1.18.6
  Compiler: gc
  Platform: linux/amd64
mgarstecki commented 1 year ago

We used to have the same error on v2.3, but it works now on v2.4.15 and K8S 1.23:

{
    "Version": "v2.4.15+05acf7a",
    "BuildDate": "2022-10-17T20:32:39Z",
    "GitCommit": "05acf7a52e377eacfee29c68e3e5e79a172ea013",
    "GitTreeState": "clean",
    "GoVersion": "go1.18.7",
    "Compiler": "gc",
    "Platform": "linux/amd64",
    "KustomizeVersion": "v4.4.1 2021-11-11T23:36:27Z",
    "HelmVersion": "v3.8.1+g5cb9af4",
    "KubectlVersion": "v0.23.1",
    "JsonnetVersion": "v0.18.0"
}

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:38:05Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.10-eks-15b7512", GitCommit:"cd6399691d9b1fed9ec20c9c5e82f5993c3f42cb", GitTreeState:"clean", BuildDate:"2022-08-31T19:17:01Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}
EppO commented 1 year ago

Using 2.4.14, I just tested with an autoscaling/v2 manifest (checked using the argocd app manifests command, since kubectl get hpa will return the latest available API version)

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  labels:
    argocd.argoproj.io/instance: myapp
  name: myapp-hpa
  namespace: default
spec:
  maxReplicas: 2
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 80
        type: Utilization
    type: Resource
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deploy

argocd can sync it correctly:

...
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to master (8f27b1c)
Health Status:      Healthy
...
autoscaling                 HorizontalPodAutoscaler  default  myapp-hpa                     Synced   Healthy        horizontalpodautoscaler.autoscaling/myapp-hpa unchanged
...

my setup:

$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.24.3
$ argocd version
argocd: v2.4.11+3d9e9f2
  BuildDate: 2022-08-22T09:35:38Z
  GitCommit: 3d9e9f2f95b7801b90377ecfc4073e5f0f07205b
  GitTreeState: clean
  GoVersion: go1.18.5
  Compiler: gc
  Platform: linux/amd64
argocd-server: v2.4.14+029be59
  BuildDate: 2022-10-05T17:15:37Z
  GitCommit: 029be590bfd5003d65ddabb4d4cb8a31bff29c18
  GitTreeState: clean
  GoVersion: go1.18.7
  Compiler: gc
  Platform: linux/amd64
  Kustomize Version: v4.4.1 2021-11-11T23:36:27Z
  Helm Version: v3.8.1+g5cb9af4
  Kubectl Version: v0.23.1
  Jsonnet Version: v0.18.0
srclark213 commented 1 year ago

I haven't been able to recreate this on a fresh cluster; I'm only seeing the issue on a server where we've recently upgraded both k8s and Argo CD. We've been slowly narrowing down our dependencies to figure out if one of them is causing this. Is there any way to check the version of the gitops-engine we're running?

EppO commented 1 year ago

Use the argocd app manifests command to see what the manifests look like when Argo CD processes your Helm chart - there might be something wrong with your chart that produces incorrect HPA manifests. Any Argo CD version >= 2.4.0 uses a gitops-engine library that supports autoscaling/v2.
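For example, something along these lines should show which apiVersion the HPA actually carries as rendered by Argo CD (`myapp` is a placeholder app name):

```shell
# Render the manifests exactly as Argo CD sees them, then print the
# apiVersion line immediately preceding each HPA's kind line.
argocd app manifests myapp | grep -B1 'kind: HorizontalPodAutoscaler'
```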

myrondev commented 1 year ago

I have this issue with Argo CD 2.2.2 and autoscaling/v1. The HPA itself works fine.

So there is only one issue here: in Argo's events I see that it's not healthy, with the message unsupported HPA GVK: autoscaling/v2, Kind=HorizontalPodAutoscaler. But I am using autoscaling/v1 - strange...

Kubectl: Client Version: v1.25.2 Kustomize Version: v4.5.7 Server Version: v1.23.4

fleeco commented 1 year ago

I'm having the same issue here. Argo helm chart 3.35.4 app version v2.2.5

fleeco commented 1 year ago

I was dumb. The problem was that the HPA couldn't access metrics - check your security groups from the cluster to the node group <3

brsolomon-deloitte commented 1 year ago

Was seeing this issue with ArgoCD 2.1.x with autoscaling/v2, and upgrading to 2.5.5 via https://raw.githubusercontent.com/argoproj/argo-cd/v2.5.5/manifests/install.yaml resolved it immediately. (Kubernetes: EKS 1.24)

ashish1099 commented 1 year ago

Seeing the issue on the latest Argo CD Helm chart 5.22.1 and on k8s 1.23.16

tdogsizle commented 1 year ago

Seeing this issue as well argocd helm chart: 5.27.5 argocd app version: v2.6.7 k8s: 1.25

sergeyignatov commented 1 year ago

A workaround we use is disabling the health check for the HPA:


resource.customizations: |
  autoscaling/HorizontalPodAutoscaler:
    health.lua: |
      hs = {}
      hs.status = "Healthy"
      hs.message = "Ignoring HPA Health Check"
      return hs
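For context, resource.customizations is a key in the argocd-cm ConfigMap. A minimal sketch of where that snippet goes, assuming Argo CD is installed in the argocd namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm            # Argo CD reads resource customizations from here
  namespace: argocd          # assumed install namespace
  labels:
    app.kubernetes.io/part-of: argocd
data:
  resource.customizations: |
    autoscaling/HorizontalPodAutoscaler:
      health.lua: |
        hs = {}
        hs.status = "Healthy"
        hs.message = "Ignoring HPA Health Check"
        return hs
```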
EppO commented 1 year ago

Are we still talking about a failing sync here? I mean ComparisonError/SyncError. If you are experiencing OutOfSync HPAs, that's a different problem, with a workaround described in the diffing FAQ:

For Horizontal Pod Autoscaling (HPA) objects, the HPA controller is known to reorder spec.metrics in a specific order. See kubernetes issue #74099. To work around this, you can order spec.metrics in Git in the same order that the controller prefers.
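In practice that means mirroring the controller's ordering in Git. A hedged sketch (the particular order here is illustrative - match whatever the live object shows):

```yaml
# If the HPA controller reorders spec.metrics and causes a permanent diff,
# reorder the metrics in Git to match the live object. For instance, if
# `kubectl get hpa <name> -o yaml` shows cpu before memory, commit:
metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 50
```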

raosharadhi commented 1 year ago

Hi team, has anyone found a solution? I am also facing the same issue. It was working fine until yesterday; now when I sync, it says "server could not find the requested resource" and the sync fails. Any help would be appreciated.

AidarGatin commented 1 year ago

Hi guys, if you can't update Argo CD, this might help you:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    argocd.argoproj.io/instance: myapp
  name: myapp-hpa
  namespace: default
...

v1 is the default version of HPA, but your actual HPA version will be autoscaling/v2 - I guess k8s forces it to v2.

Works on setup: K8s Rev: v1.27.4-eks Argo: v2.3.3

NiklasRosenstein commented 9 months ago

I'm also facing this issue trying to deploy the GitLab Helm chart with ArgoCD. I'm using K3s v1.27 and there's no autoscaling/v1 or autoscaling/v1beta1 that I could use instead. 👀 It basically renders ArgoCD unable to sync any changes after initial deployment.

gxpd-jjh commented 8 months ago

For anyone facing this issue:

1. Make sure you bump a new version of your chart using the new HPA after you upgrade to Argo CD > 2.4.
2. In your HPA spec, make sure you are using the new metrics: syntax and that metrics has content. (My mistake was an empty metrics: left over after some if/else.)