kubernetes-sigs / external-dns

Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services
Apache License 2.0

Kubernetes ClusterRole not updated with Traefik apiGroups #3960

Closed bodanc closed 6 months ago

bodanc commented 1 year ago

What happened:

I deployed external-dns via Helm in my AWS EKS cluster:

~ helm --kubeconfig=./kubeconfig.exp-1.yaml list
NAME            NAMESPACE   REVISION    UPDATED                                 STATUS      CHART                   APP VERSION
external-dns    default     1           2023-09-26 13:20:27.144529 +0200 CEST   deployed    external-dns-6.26.1     0.13.6

My external-dns Helm values.yaml file contains only the following configuration changes:

args:
  - "--source=traefik-proxy"
  - "--provider=aws"
  - "--aws-zone-type="
  - "--policy=sync"
  - "--registry=txt"
  - "--interval=1m"
  - "--aws-api-retries=3"
  - "--aws-batch-change-size=100"
  - "--log-level=debug"
  - "--log-format=text"

Very soon after having been deployed, the external-dns pod begins to restart in a loop:

time="2023-09-26T11:43:22Z" level=fatal msg="failed to sync traefik.io/v1alpha1, Resource=ingressroutes: context deadline exceeded"

If, however, I patch the external-dns-default ClusterRole, everything works:

~ kubectl edit clusterroles.rbac.authorization.k8s.io/external-dns-default

- apiGroups:
  - traefik.containo.us
  - traefik.io
  resources:
  - ingressroutes
  - ingressroutetcps
  - ingressrouteudps
  verbs:
  - get
  - watch
  - list
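
For reference, the same rule can also be appended non-interactively with a JSON patch (the rule content mirrors the snippet above; note that a chart upgrade may overwrite such a manual change either way):

~ kubectl patch clusterrole external-dns-default --type=json \
    -p='[{"op":"add","path":"/rules/-","value":{"apiGroups":["traefik.containo.us","traefik.io"],"resources":["ingressroutes","ingressroutetcps","ingressrouteudps"],"verbs":["get","watch","list"]}}]'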

What you expected to happen:

Unless I'm mistaken or missing something, I would expect the following, as per https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/traefik-proxy.md#manifest-for-clusters-with-rbac-enabled:

If external-dns is passed the --source=traefik-proxy argument at startup, the external-dns-default ClusterRole is dynamically adjusted to include the correct Traefik apiGroups.
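
One way to check whether a given chart version renders the Traefik rules at all is to template the chart locally and inspect the resulting ClusterRole (the repo and template file names here are illustrative and may differ for your chart):

~ helm template external-dns bitnami/external-dns -f values.yaml --show-only templates/clusterrole.yaml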

How to reproduce it (as minimally and precisely as possible):

Please see above :)

Anything else we need to know?:

Environment:

mertsaygi commented 1 year ago

+1

dragonlipz commented 11 months ago

This does not appear to be limited to Traefik: on Azure AKS with Istio, the same thing occurs. The expected extra apiGroups are not added when the source is specified via extraArgs in values.yaml. Manually applying the missing apiGroups to the ClusterRole fixes the problem (assuming all other security aspects are correct).

k8s: 1.27.3, helm: v3.12.3

The Helm template charts/external-dns/templates/clusterrole.yaml has the sections needed, but they don't appear to be applied properly.

Chart installation against cluster

$ helm upgrade -f ./azuredns.yaml --namespace external-dns --install external-dns external-dns/external-dns
Release "external-dns" has been upgraded. Happy Helming!
NAME: external-dns
LAST DEPLOYED: Wed Nov 22 10:33:02 2023
NAMESPACE: external-dns
STATUS: deployed
REVISION: 6
TEST SUITE: None
NOTES:
***********************************************************************
* External DNS                                                        *
***********************************************************************
  Chart version: 1.13.1
  App version:   0.13.6
  Image tag:     registry.k8s.io/external-dns/external-dns:v0.13.6
***********************************************************************

azuredns.yaml used above

fullnameOverride: external-dns

serviceAccount:
  annotations:
    azure.workload.identity/client-id: 00000000-0000-0000-0000-000000000000

podLabels:
  azure.workload.identity/use: "true"

provider: azure

extraArgs:
  - --domain-filter=somedomain.com
  - --source=istio-virtualservice

secretConfiguration:
  enabled: true
  mountPath: "/etc/kubernetes/"
  data:
    azure.json: |
      {
        "tenantId": "00000000-0000-0000-0000-000000000000",
        "subscriptionId": "00000000-0000-0000-0000-000000000000",
        "resourceGroup": "networking",
        "useWorkloadIdentityExtension": true
      }

Missing apiGroups that need to be manually added to ClusterRole/external-dns:

- apiGroups:
  - networking.istio.io
  resources:
  - virtualservices
  - gateways
  verbs:
  - get
  - watch
  - list
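
For what it's worth, the official chart's clusterrole.yaml appears to gate these Istio rules on the sources value rather than on extraArgs, so declaring the source there may let the chart render them itself. A minimal sketch of that values.yaml change (unverified against every chart version; check the installed chart's values.yaml):

sources:
  - service
  - ingress
  - istio-virtualservice   # declared here so clusterrole.yaml can render the networking.istio.io rules

extraArgs:
  - --domain-filter=somedomain.com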

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 6 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/external-dns/issues/3960#issuecomment-2067767996):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.