argoproj / argo-cd

Declarative Continuous Deployment for Kubernetes
https://argo-cd.readthedocs.io
Apache License 2.0

failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:argocd:argocd-server" cannot list resource "configmaps" in API group "" in the namespace "argocd" #9392

Closed · kotalakshman closed this issue 8 months ago

kotalakshman commented 2 years ago

I deployed Argo CD with `kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml`.
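If the manifest applied cleanly, the exact permission named in the title can be checked by impersonating the service account (a quick sketch, assuming the default `argocd` namespace and the resource names from install.yaml; impersonation requires a caller with `impersonate` permission, e.g. cluster-admin):

```shell
# Ask the API server whether the argocd-server service account
# can list ConfigMaps in the argocd namespace -- the exact access
# the error message says is forbidden:
kubectl auth can-i list configmaps \
  --as=system:serviceaccount:argocd:argocd-server \
  -n argocd

# Confirm the Role/RoleBinding from install.yaml actually exist:
kubectl get role,rolebinding -n argocd | grep argocd-server
```

If `can-i` answers `no`, the Role/RoleBinding either never applied or reference a different namespace.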

danielhelfand commented 2 years ago

Hey @kotalakshman,

Could you share the output of `argocd version` and `kubectl version`?

jaki300 commented 2 years ago

{ "metadata": {}, "items": [{ "server": "https://kubernetes.default.svc", "name": "in-cluster", "config": { "tlsClientConfig": { "insecure": false } }, "connectionState": { "status": "Failed", "message": "failed to sync cluster https://172.30.0.1:443: failed to load initial state of resource ConfigMap: configmaps is forbidden: User \"system:serviceaccount:argocd1:argocd-application-controller\" cannot list resource \"configmaps\" in API group \"\" at the cluster scope", "attemptedAt": "2022-10-23T14:39:11Z" }, "serverVersion": "1.20", "info": { "connectionState": { "status": "Failed", "message": "failed to sync cluster https://172.30.0.1:443: failed to load initial state of resource ConfigMap: configmaps is forbidden: User \"system:serviceaccount:argocd1:argocd-application-controller\" cannot list resource \"configmaps\" in API group \"\" at the cluster scope", "attemptedAt": "2022-10-23T14:39:11Z" }, "serverVersion": "1.20", "cacheInfo": {}, "applicationsCount": 3, "apiVersions": ["admissionregistration.k8s.io/v1", "admissionregistration.k8s.io/v1beta1", "apiextensions.k8s.io/v1", "apiextensions.k8s.io/v1beta1", "apiregistration.k8s.io/v1", "apiregistration.k8s.io/v1beta1", "apps.openshift.io/v1", "apps/v1", "argoproj.io/v1alpha1", "authentication.maistra.io/v1", "authorization.openshift.io/v1", "autoscaling.openshift.io/v1", "autoscaling.openshift.io/v1beta1", "autoscaling/v1", "autoscaling/v2beta1", "autoscaling/v2beta2", "batch/v1", "batch/v1beta1", "build.openshift.io/v1", "certificates.k8s.io/v1", "certificates.k8s.io/v1beta1", "cloudcredential.openshift.io/v1", "config.istio.io/v1alpha2", "config.openshift.io/v1", "console.openshift.io/v1", "console.openshift.io/v1alpha1", "containo.us/v1alpha1", "controlplane.operator.openshift.io/v1alpha1", "coordination.k8s.io/v1", "coordination.k8s.io/v1beta1", "core.strimzi.io/v1beta2", "discovery.k8s.io/v1beta1", "events.k8s.io/v1", "events.k8s.io/v1beta1", "extensions.istio.io/v1alpha1", "extensions/v1beta1", "federation.maistra.io/v1", "flowcontrol.apiserver.k8s.io/v1alpha1", "flowcontrol.apiserver.k8s.io/v1beta1", "helm.openshift.io/v1beta1", "image.openshift.io/v1", "imageregistry.operator.openshift.io/v1", "ingress.operator.openshift.io/v1", "install.istio.io/v1alpha1", "integreatly.org/v1alpha1", "jaegertracing.io/v1", "k8s.cni.cncf.io/v1", "kafka.strimzi.io/v1alpha1", "kafka.strimzi.io/v1beta1", "kafka.strimzi.io/v1beta2", "kiali.io/v1alpha1", "kubernetes.zabbix.com/v1alpha1", "logging.openshift.io/v1", "machine.openshift.io/v1beta1", "machineconfiguration.openshift.io/v1", "maistra.io/v1", "maistra.io/v1alpha1", "maistra.io/v2", "metal3.io/v1alpha1", "migration.k8s.io/v1alpha1", "monitoring.coreos.com/v1", "monitoring.coreos.com/v1alpha1", "monitoring.kiali.io/v1alpha1", "network.openshift.io/v1", "network.operator.openshift.io/v1", "networking.istio.io/v1alpha3", "networking.istio.io/v1beta1", "networking.k8s.io/v1", "networking.k8s.io/v1beta1", "node.k8s.io/v1", "node.k8s.io/v1beta1", "oauth.openshift.io/v1", "operator.openshift.io/v1", "operator.openshift.io/v1alpha1", "operators.coreos.com/v1", "operators.coreos.com/v1alpha1", "operators.coreos.com/v1alpha2", "operators.coreos.com/v2", "pipelines.openshift.io/v1alpha1", "policy/v1beta1", "project.openshift.io/v1", "quota.openshift.io/v1", "rbac.authorization.k8s.io/v1", "rbac.authorization.k8s.io/v1beta1", "rbac.istio.io/v1alpha1", "rbac.maistra.io/v1", "route.openshift.io/v1", "samples.operator.openshift.io/v1", "scheduling.k8s.io/v1", "scheduling.k8s.io/v1beta1", 
"security.internal.openshift.io/v1", "security.istio.io/v1beta1", "security.openshift.io/v1", "snapshot.storage.k8s.io/v1", "snapshot.storage.k8s.io/v1beta1", "storage.k8s.io/v1", "storage.k8s.io/v1beta1", "telemetry.istio.io/v1alpha1", "template.openshift.io/v1", "traefik.containo.us/v1alpha1", "tuned.openshift.io/v1", "user.openshift.io/v1", "v1", "weblogic.oracle/v7", "weblogic.oracle/v8", "whereabouts.cni.cncf.io/v1alpha1"] } }] }

ghost commented 11 months ago

Same problem here, deployed in HA mode:

```yaml
      containers:
        - name: argocd-server
          image: 'quay.io/argoproj/argocd:v2.9.2'
          args:
            - /usr/local/bin/argocd-server
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8083
              protocol: TCP
```

```
2023-12-13T10:38:21.509136651+08:00 E1213 02:38:21.508863       7 reflector.go:138] pkg/mod/k8s.io/client-go@v0.24.2/tools/cache/reflector.go:167: Failed to watch *v1.Secret: unknown (get secrets)
2023-12-13T10:38:55.158350031+08:00 W1213 02:38:55.158194       7 reflector.go:324] pkg/mod/k8s.io/client-go@v0.24.2/tools/cache/reflector.go:167: failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:argocd:argocd-server" cannot list resource "configmaps" in API group "" in the namespace "argocd"
2023-12-13T10:38:55.158381132+08:00 E1213 02:38:55.158242       7 reflector.go:138] pkg/mod/k8s.io/client-go@v0.24.2/tools/cache/reflector.go:167: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:argocd:argocd-server" cannot list resource "configmaps" in API group "" in the namespace "argocd"
```
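For the namespaced denials above: `argocd-server` reads ConfigMaps and Secrets in its own namespace through a Role/RoleBinding rather than a ClusterRole, so if those objects were pruned or applied to a different namespace, these reflector errors are the symptom. A minimal sketch of the expected shape, assuming the stock names from install.yaml (the real Role grants additional verbs and resources):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-server
  namespace: argocd
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    # The stock Role grants more verbs (create/update/delete etc.);
    # get/list/watch is the minimum the reflector errors point at.
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-server
  namespace: argocd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argocd-server
subjects:
  - kind: ServiceAccount
    name: argocd-server
    namespace: argocd
```
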
jgwest commented 8 months ago

This should be resolved: the install YAML includes the correct ClusterRoles when Argo CD is installed via the cluster-scoped install.
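For anyone landing here on a current release, the state of the cluster-scoped RBAC is quick to verify (a sketch using the stock resource names; impersonation requires `impersonate` permission, which cluster-admin has):

```shell
# The cluster-scoped install should have created these ClusterRoles:
kubectl get clusterrole argocd-server argocd-application-controller

# And the controller should now be able to list ConfigMaps cluster-wide:
kubectl auth can-i list configmaps --all-namespaces \
  --as=system:serviceaccount:argocd:argocd-application-controller
```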