argoproj / argo-cd

Declarative Continuous Deployment for Kubernetes
https://argo-cd.readthedocs.io
Apache License 2.0

argo-cd only detect the api-version from the cluster where it's deployed #13909

Open ervikrant06 opened 1 year ago

ervikrant06 commented 1 year ago

Checklist:

Describe the bug

Using a single instance of argo-cd to manage multiple deployments.

We have an existing deployment of argocd running on a kube 1.13.3 cluster.

Now we are in the process of upgrading our k8s clusters. We have deployed argocd on a new v1.24.2 cluster, but we can't manage the applications running on the 1.13.3 cluster (taking Prometheus as an example).

It's using the following API extension, which is only available on 1.13.3, not on 1.24.2:

$ kubectl api-versions  | grep apiextensions.k8s.io
apiextensions.k8s.io/v1beta1

argocd deployed on 1.24.2 picks up the api-extensions from the local cluster, which doesn't include the above-mentioned api-extension but has:

$ kubectl api-versions  | grep apiextensions.k8s.io
apiextensions.k8s.io/v1

This makes argocd unable to ever complete the Prometheus sync. How can we make argocd collect the api-extensions from the remote managed cluster instead of the local one?
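For comparison, Helm itself lets you pin the capabilities used during rendering instead of discovering them from a cluster. A minimal sketch (the chart path and versions below are illustrative, not taken from this issue) of rendering as if targeting the old cluster:

```shell
# Render the chart as if targeting the old 1.13.3 cluster:
# --kube-version pins .Capabilities.KubeVersion,
# --api-versions populates .Capabilities.APIVersions.
helm template prometheus ./charts/prometheus \
  --kube-version 1.13.3 \
  --api-versions apiextensions.k8s.io/v1beta1
```

Something equivalent is what Argo CD's repo-server would need to do per destination cluster.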

To Reproduce

Expected behavior

Screenshots

Version

$ argocd version
argocd: v2.7.3+e7891b8.dirty
  BuildDate: 2023-05-24T15:05:34Z
  GitCommit: e7891b899a35dca06ae94965ea5ae2a86b344848
  GitTreeState: dirty
  GoVersion: go1.19.6
  Compiler: gc
  Platform: linux/amd64
FATA[0000] Argo CD server address unspecified

Logs

Paste any relevant application logs here.
llavaud commented 11 months ago

I would like a solution for this use case too, but currently I don't think it is possible: rendering occurs in the repo-server component, and it doesn't have access to remote clusters there... :(

jgwest commented 9 months ago

Kubernetes version that the bug is reported against is out of support for Argo CD: https://argo-cd.readthedocs.io/en/stable/operator-manual/installation/#supported-versions

llavaud commented 9 months ago

I think this issue should stay open; the fact that the example references an outdated and no longer supported Kubernetes version doesn't change the underlying problem...

jgwest commented 9 months ago

@llavaud @ervikrant06 Can you describe your expected behaviour here? As you said, the repo server doesn't have access to the cluster, by design. So there is no way for us to modify the generated manifests based on the cluster API version.
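One direction that keeps the repo server cluster-agnostic is letting the user declare the target capabilities explicitly on the Application. Later Argo CD releases (v2.9+, if I recall correctly) expose Helm overrides for this; the sketch below assumes such a version, and the repo URL, chart name, and destination are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prometheus
spec:
  source:
    repoURL: https://example.com/charts   # illustrative
    chart: prometheus
    helm:
      # Override what the repo server passes to Helm instead of
      # using the local cluster's discovery data:
      kubeVersion: 1.13.3
      apiVersions:
        - apiextensions.k8s.io/v1beta1
  destination:
    server: https://remote-cluster.example.com  # illustrative
    namespace: monitoring
```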

llavaud commented 9 months ago

> @llavaud @ervikrant06 Can you describe your expected behaviour here? As you said, the repo server doesn't have access to the cluster, by design. So there is no way for us to modify the generated manifests based on the cluster API version.

I don't have the solution, but I think we should keep it open to discuss possible solutions or workarounds. The current design is problematic in a hub-and-spoke pattern with Helm charts, as the api-versions and kube-version provided to the chart are wrong and can lead to invalid manifests.
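To make the failure mode concrete: charts commonly branch on `.Capabilities.APIVersions`, so rendering with the hub cluster's capabilities picks the wrong branch for the spoke. A typical (illustrative, not from any specific chart) template fragment:

```yaml
# templates/crd.yaml (illustrative): the chart emits a v1 CRD on new
# clusters and falls back to v1beta1 on old ones. Rendered on the hub
# (which only has apiextensions.k8s.io/v1), the v1beta1 branch is never
# taken, so the manifest is invalid for the 1.13.3 spoke cluster.
{{- if .Capabilities.APIVersions.Has "apiextensions.k8s.io/v1" }}
apiVersion: apiextensions.k8s.io/v1
{{- else }}
apiVersion: apiextensions.k8s.io/v1beta1
{{- end }}
kind: CustomResourceDefinition
```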