argoproj / argo-cd

Declarative Continuous Deployment for Kubernetes
https://argo-cd.readthedocs.io
Apache License 2.0

argocd-application-controller manager appears in managedFields when creating a Role #20815

Open sachaos opened 5 days ago

sachaos commented 5 days ago

Describe the bug

When resources such as Deployments are created, their managedFields indicate that they were applied by argocd-controller. However, when a Role is created, its managedFields also show a manager named argocd-application-controller.

Is this the expected behavior?

Sometimes I notice that a Deployment also has argocd-application-controller set as a manager, but I am not sure how to reproduce that behavior now. If you know the conditions under which this manager is used, please let me know.

To Reproduce

Create a Kubernetes cluster with kind (kind itself is probably not required to reproduce the problem):

kind create cluster -n argocd-sandbox

Set up Argo CD:

kubectl create namespace argocd --context kind-argocd-sandbox
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml --context kind-argocd-sandbox

Create app.yaml to load the manifests from https://github.com/sachaos/20241117-argocd-application-controller . The repository contains manifests for a Deployment and a Role.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/sachaos/20241117-argocd-application-controller.git
    targetRevision: master
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - ServerSideApply=true
      - ApplyOutOfSyncOnly=true
      - PruneLast=true

Apply app.yaml:

kubectl apply -f app.yaml --context kind-argocd-sandbox

Check the managers of the created Role. Note the argocd-application-controller entry:

❯ kubectl get role pod-manager-role --show-managed-fields -o json | jq '.metadata.managedFields[].manager'
"argocd-controller"
"argocd-application-controller"

Check the managers of the created Deployment. There is no argocd-application-controller entry here, which I believe is the expected behavior:

❯ kubectl get deploy example-deployment --show-managed-fields -o json | jq '.metadata.managedFields[].manager'
"argocd-controller"
"kube-controller-manager"

Expected behavior

The manager argocd-application-controller does not appear in managedFields.

Version

❯ argocd version
argocd: v2.10.7+b060053.dirty
  BuildDate: 2024-04-15T12:31:39Z
  GitCommit: b060053b099b4c81c1e635839a309c9c8c1863e9
  GitTreeState: dirty
  GoVersion: go1.22.2
  Compiler: gc
  Platform: darwin/arm64
argocd-server: v2.13.0+347f221
  BuildDate: 2024-11-04T12:09:06Z
  GitCommit: 347f221adba5599ef4d5f12ee572b2c17d01db4d
  GitTreeState: clean
  GoVersion: go1.23.1
  Compiler: gc
  Platform: linux/arm64
  Kustomize Version: v5.4.3 2024-07-19T16:40:33Z
  Helm Version: v3.15.4+gfa9efb0
  Kubectl Version: v0.31.0
  Jsonnet Version: v0.20.0

andrii-korotkov-verkada commented 4 days ago

Looks like argocd-controller is what should be there for server-side apply.

    // ArgoCDSSAManager is the default argocd manager name used by server-side apply syncs
    ArgoCDSSAManager = "argocd-controller"

argocd-application-controller, on the other hand, appears to be some default manager name. Can you share which fields are shown as managed by which controller?
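For anyone following along, one way to answer that question is to print each managedFields entry's manager, operation, and the top-level field keys it owns. This is a sketch using a hard-coded minimal fragment rather than live cluster output (jq assumed):

```shell
# Minimal managedFields fragment for illustration; on a real cluster, replace
# the file with the output of:
#   kubectl get role pod-manager-role --show-managed-fields -o json \
#     | jq '.metadata.managedFields'
cat > /tmp/mf.json <<'EOF'
[
  {"manager": "argocd-controller", "operation": "Apply",
   "fieldsV1": {"f:metadata": {}, "f:rules": {}}}
]
EOF

# For each entry: who manages it, via which operation, and which top-level fields it owns.
jq '.[] | {manager, operation, owns: (.fieldsV1 | keys)}' /tmp/mf.json
```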

sachaos commented 4 days ago

@andrii-korotkov-verkada Yes. This is the managedFields.

Role

managedFields

```
kubectl get role pod-manager-role --show-managed-fields -o json | jq '.metadata.managedFields'
[
  {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "fieldsType": "FieldsV1",
    "fieldsV1": {
      "f:metadata": {
        "f:labels": {
          "f:app.kubernetes.io/instance": {}
        }
      },
      "f:rules": {}
    },
    "manager": "argocd-controller",
    "operation": "Apply"
  },
  {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "fieldsType": "FieldsV1",
    "fieldsV1": {
      "f:metadata": {
        "f:labels": {
          ".": {},
          "f:app.kubernetes.io/instance": {}
        }
      },
      "f:rules": {}
    },
    "manager": "argocd-application-controller",
    "operation": "Update",
    "time": "2024-11-17T12:36:44Z"
  }
]
```

Deployment

managedFields

```
kubectl get deploy example-deployment --show-managed-fields -o json | jq '.metadata.managedFields'
[
  {
    "apiVersion": "apps/v1",
    "fieldsType": "FieldsV1",
    "fieldsV1": {
      "f:metadata": {
        "f:labels": {
          "f:app": {},
          "f:app.kubernetes.io/instance": {}
        }
      },
      "f:spec": {
        "f:replicas": {},
        "f:selector": {},
        "f:template": {
          "f:metadata": {
            "f:labels": {
              "f:app": {}
            }
          },
          "f:spec": {
            "f:containers": {
              "k:{\"name\":\"nginx\"}": {
                ".": {},
                "f:image": {},
                "f:name": {},
                "f:ports": {
                  "k:{\"containerPort\":80,\"protocol\":\"TCP\"}": {
                    ".": {},
                    "f:containerPort": {}
                  }
                }
              }
            }
          }
        }
      }
    },
    "manager": "argocd-controller",
    "operation": "Apply",
    "time": "2024-11-17T12:42:34Z"
  },
  {
    "apiVersion": "apps/v1",
    "fieldsType": "FieldsV1",
    "fieldsV1": {
      "f:metadata": {
        "f:annotations": {
          ".": {},
          "f:deployment.kubernetes.io/revision": {}
        }
      },
      "f:status": {
        "f:availableReplicas": {},
        "f:conditions": {
          ".": {},
          "k:{\"type\":\"Available\"}": {
            ".": {},
            "f:lastTransitionTime": {},
            "f:lastUpdateTime": {},
            "f:message": {},
            "f:reason": {},
            "f:status": {},
            "f:type": {}
          },
          "k:{\"type\":\"Progressing\"}": {
            ".": {},
            "f:lastTransitionTime": {},
            "f:lastUpdateTime": {},
            "f:message": {},
            "f:reason": {},
            "f:status": {},
            "f:type": {}
          }
        },
        "f:observedGeneration": {},
        "f:readyReplicas": {},
        "f:replicas": {},
        "f:updatedReplicas": {}
      }
    },
    "manager": "kube-controller-manager",
    "operation": "Update",
    "subresource": "status",
    "time": "2024-11-17T12:42:49Z"
  }
]
```

andrii-korotkov-verkada commented 4 days ago

Ah, my guess would be that for different operations there can be different managers for the same fields.
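That guess matches the Role's managedFields dump above: both entries claim `f:rules`, one via Apply and one via Update. A quick check below filters the entries that own `f:rules` and prints each owner with its operation; it uses a reduced hard-coded copy of the dump from this thread (jq assumed), not a live cluster:

```shell
# Role managedFields, reduced to the fields relevant here (from the dump above).
cat > /tmp/role-owners.json <<'EOF'
[
  {"manager": "argocd-controller", "operation": "Apply",
   "fieldsV1": {"f:metadata": {"f:labels": {"f:app.kubernetes.io/instance": {}}}, "f:rules": {}}},
  {"manager": "argocd-application-controller", "operation": "Update",
   "fieldsV1": {"f:metadata": {"f:labels": {".": {}, "f:app.kubernetes.io/instance": {}}}, "f:rules": {}}}
]
EOF

# Which managers claim f:rules, and through which operation?
jq -r '.[] | select(.fieldsV1 | has("f:rules")) | "\(.manager) (\(.operation))"' /tmp/role-owners.json
```

This prints both `argocd-controller (Apply)` and `argocd-application-controller (Update)`, i.e. the same field has two owners recorded under different operations.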

sachaos commented 3 days ago

Please let me know if you need more information!

andrii-korotkov-verkada commented 4 hours ago

Is there an immediate issue with having different managers?