I am sorry but I am not able to reproduce the issue. I am installing the chart using the following command:
$ helm install external-dns bitnami/external-dns -n kube-system --set rbac.clusterRole=False
NAME: external-dns
LAST DEPLOYED: Fri Nov 20 15:48:03 2020
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
To verify that external-dns has started, run:
kubectl --namespace=kube-system get pods -l "app.kubernetes.io/name=external-dns,app.kubernetes.io/instance=external-dns"
Then I can see the release, the pod and the logs:
$ helm ls --namespace kube-system
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
external-dns kube-system 1 2020-11-20 15:48:03.065087062 +0000 UTC deployed external-dns-4.0.0 0.7.4
$ kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
external-dns-7764f6bf64-xg9cf 1/1 Running 0 97s
$ kubectl logs external-dns-7764f6bf64-xg9cf --namespace kube-system
time="2020-11-20T15:48:08Z" level=info msg="config: {APIServerURL: KubeConfig: RequestTimeout:30s ContourLoadBalancerService:heptio-contour/contour SkipperRouteGroupVersion:zalando.org/v1 Sources:[service ingress] Namespace: AnnotationFilter: LabelFilter: FQDNTemplate: CombineFQDNAndAnnotation:false IgnoreHostnameAnnotation:false IgnoreIngressTLSSpec:false Compatibility: PublishInternal:false PublishHostIP:false AlwaysPublishNotReadyAddresses:false ConnectorSourceServer:localhost:8080 Provider:aws GoogleProject: GoogleBatchChangeSize:1000 GoogleBatchChangeInterval:1s DomainFilter:[] ExcludeDomains:[] ZoneNameFilter:[] ZoneIDFilter:[] AlibabaCloudConfigFile:/etc/kubernetes/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType: AWSZoneTagFilter:[] AWSAssumeRole: AWSBatchChangeSize:1000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true AWSAPIRetries:3 AWSPreferCNAME:false AWSZoneCacheDuration:0s AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: AzureSubscriptionID: AzureUserAssignedIdentityClientID: CloudflareProxied:false CloudflareZonesPerPage:50 CoreDNSPrefix:/skydns/ RcodezeroTXTEncrypt:false AkamaiServiceConsumerDomain: AkamaiClientToken: AkamaiClientSecret: AkamaiAccessToken: InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true InfobloxView: InfobloxMaxResults:0 DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml InMemoryZones:[] OVHEndpoint:ovh-eu OVHApiRateLimit:20 PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:upsert-only Registry:txt TXTOwnerID:default TXTPrefix: TXTSuffix: Interval:1m0s Once:false DryRun:false UpdateEvents:false LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s ExoscaleEndpoint:https://api.exoscale.ch/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io/v1alpha1 CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] CFAPIEndpoint: CFUsername: CFPassword: RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false RFC2136MinTTL:0s NS1Endpoint: NS1IgnoreSSL:false NS1MinTTLSeconds:0 TransIPAccountName: TransIPPrivateKeyFile: DigitalOceanAPIPageSize:50}"
time="2020-11-20T15:48:08Z" level=info msg="Instantiating new Kubernetes client"
time="2020-11-20T15:48:08Z" level=info msg="Using inCluster-config based on serviceaccount-token"
time="2020-11-20T15:48:08Z" level=info msg="Created Kubernetes client https://10.30.240.1:443"
I can also check the values used by running:
$ helm get values external-dns --namespace kube-system
USER-SUPPLIED VALUES:
rbac:
clusterRole: false
How are you installing the chart? Are you following a different process?
Hello, thank you for taking a look at this. I will provide the same outputs:
$ helm install external-dns bitnami/external-dns -n kube-system --set rbac.clusterRole=False
NAME: external-dns
LAST DEPLOYED: Fri Nov 20 18:19:39 2020
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
To verify that external-dns has started, run:
kubectl --namespace=kube-system get pods -l "app.kubernetes.io/name=external-dns,app.kubernetes.io/instance=external-dns"
$ helm ls --namespace kube-system
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
external-dns kube-system 1 2020-11-20 18:19:39.265091 +0200 EET deployed external-dns-4.0.0 0.7.4
$ kubectl logs -f external-dns-7764f6bf64-kcm2c --namespace kube-system
time="2020-11-20T16:20:47Z" level=info msg="config: {APIServerURL: KubeConfig: RequestTimeout:30s ContourLoadBalancerService:heptio-contour/contour SkipperRouteGroupVersion:zalando.org/v1 Sources:[service ingress] Namespace: AnnotationFilter: LabelFilter: FQDNTemplate: CombineFQDNAndAnnotation:false IgnoreHostnameAnnotation:false IgnoreIngressTLSSpec:false Compatibility: PublishInternal:false PublishHostIP:false AlwaysPublishNotReadyAddresses:false ConnectorSourceServer:localhost:8080 Provider:aws GoogleProject: GoogleBatchChangeSize:1000 GoogleBatchChangeInterval:1s DomainFilter:[] ExcludeDomains:[] ZoneNameFilter:[] ZoneIDFilter:[] AlibabaCloudConfigFile:/etc/kubernetes/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType: AWSZoneTagFilter:[] AWSAssumeRole: AWSBatchChangeSize:1000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true AWSAPIRetries:3 AWSPreferCNAME:false AWSZoneCacheDuration:0s AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: AzureSubscriptionID: AzureUserAssignedIdentityClientID: CloudflareProxied:false CloudflareZonesPerPage:50 CoreDNSPrefix:/skydns/ RcodezeroTXTEncrypt:false AkamaiServiceConsumerDomain: AkamaiClientToken: AkamaiClientSecret: AkamaiAccessToken: InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true InfobloxView: InfobloxMaxResults:0 DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml InMemoryZones:[] OVHEndpoint:ovh-eu OVHApiRateLimit:20 PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:upsert-only Registry:txt TXTOwnerID:default TXTPrefix: TXTSuffix: Interval:1m0s Once:false DryRun:false UpdateEvents:false LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s ExoscaleEndpoint:https://api.exoscale.ch/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io/v1alpha1 CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] CFAPIEndpoint: CFUsername: CFPassword: RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false RFC2136MinTTL:0s NS1Endpoint: NS1IgnoreSSL:false NS1MinTTLSeconds:0 TransIPAccountName: TransIPPrivateKeyFile: DigitalOceanAPIPageSize:50}"
time="2020-11-20T16:20:47Z" level=info msg="Instantiating new Kubernetes client"
time="2020-11-20T16:20:47Z" level=info msg="Using inCluster-config based on serviceaccount-token"
time="2020-11-20T16:20:47Z" level=info msg="Created Kubernetes client https://172.20.0.1:443"
time="2020-11-20T16:21:47Z" level=fatal msg="failed to sync cache: timed out waiting for the condition"
$ helm get values external-dns --namespace kube-system
USER-SUPPLIED VALUES:
rbac:
clusterRole: false
What Kubernetes version are you using? Could it be version-dependent? We are using v1.16.
The version shouldn't be an issue for that; I am also using 1.16. Is there any scenario in the same cluster where you can't reproduce the issue? I mean, using a different namespace or even without specifying a namespace, setting rbac to true, etc.
If RBAC is set to true and the cluster role is created, then external-dns works fine. But if we do not create the cluster role and set rbac to false, it fails immediately, even in the same namespace. Any ideas?
I'm sorry, but I am running out of ideas. It seems to be something related to the specific cluster configuration or some permission/auth detail in the setup, since it's really weird that the issue does not reproduce in my cluster without RBAC yet yours needs the full cluster role. Looking into the external-dns repo, there are some issues pointing in the same direction, such as https://github.com/kubernetes-sigs/external-dns/issues/961
If we install without rbac.clusterRole=false, the Helm chart creates a ClusterRole, which gives access to the whole cluster; that's why it works. But if we want to restrict access to a single namespace with this parameter, it fails.
$ kubectl -n kube-system describe role external-dns
Name: external-dns
Labels: app.kubernetes.io/instance=external-dns
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=external-dns
helm.sh/chart=external-dns-4.0.0
Annotations: meta.helm.sh/release-name: external-dns
meta.helm.sh/release-namespace: kube-system
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
endpoints [] [] [get list watch]
pods [] [] [get list watch]
services [] [] [get list watch]
ingresses.extensions [] [] [get list watch]
gateways.networking.istio.io [] [] [get list watch]
virtualservices.networking.istio.io [] [] [get list watch]
ingresses.networking.k8s.io [] [] [get list watch]
routegroups.zalando.org [] [] [get list watch]
httpproxies.projectcontour.io [] [] [get watch list]
routegroups.zalando.org/status [] [] [patch update]
$ kubectl -n kube-system describe rolebinding external-dns
Name: external-dns
Labels: app.kubernetes.io/instance=external-dns
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=external-dns
helm.sh/chart=external-dns-4.0.0
Annotations: meta.helm.sh/release-name: external-dns
meta.helm.sh/release-namespace: kube-system
Role:
Kind: Role
Name: external-dns
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount external-dns kube-system
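As a side note, a quick way to see the practical gap between this namespaced Role and the chart's ClusterRole is kubectl auth can-i with service-account impersonation (a standard kubectl feature; the names below match the kube-system release in this thread). Namespaced resources granted by the Role should be allowed, while cluster-scoped resources such as nodes cannot be granted by a Role at all, so with rbac.clusterRole=false the second check should be denied:
$ kubectl auth can-i list services --namespace kube-system --as=system:serviceaccount:kube-system:external-dns
yes
$ kubectl auth can-i list nodes --as=system:serviceaccount:kube-system:external-dns
no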
I see. By default, these are the objects created when enabling rbac:
# Source: external-dns/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
    helm.sh/chart: external-dns-4.3.1
    app.kubernetes.io/instance: external-dns
    app.kubernetes.io/managed-by: Helm
---
# Source: external-dns/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
    helm.sh/chart: external-dns-4.3.1
    app.kubernetes.io/instance: external-dns
    app.kubernetes.io/managed-by: Helm
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - pods
      - nodes
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.istio.io
    resources:
      - gateways
      - virtualservices
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - zalando.org
    resources:
      - routegroups
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - zalando.org
    resources:
      - routegroups/status
    verbs:
      - patch
      - update
  - apiGroups:
      - projectcontour.io
    resources:
      - httpproxies
    verbs:
      - get
      - watch
      - list
---
# Source: external-dns/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
    helm.sh/chart: external-dns-4.3.1
    app.kubernetes.io/instance: external-dns
    app.kubernetes.io/managed-by: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: default
If, according to your needs, you think this is too open, you can modify the objects until they meet the requirements of your use case.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
This is not resolved. Installing the external-dns chart with the following config:
rbac:
  create: true
  clusterRole: false
provider: digitalocean
digitalocean:
  apiToken: token
logLevel: trace
interval: "1m"
policy: sync # or upsert-only
domainFilters: ['domain.org']
namespace: staging
As expected, no ClusterRole is created (since we want this to be namespace-only). A Role and a RoleBinding are created.
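As a sanity check, the rendered manifests can be inspected offline with helm template (assuming the values above are saved as values.yaml); with clusterRole: false the output should contain a Role and a RoleBinding but no ClusterRole:
$ helm template external-dns bitnami/external-dns --namespace staging -f values.yaml | grep '^kind:' | sort | uniq -c
The pod then crashes on startup with the logs below: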
time="2021-03-04T12:30:38Z" level=debug msg="apiServerURL: "
time="2021-03-04T12:30:38Z" level=debug msg="kubeConfig: "
time="2021-03-04T12:30:38Z" level=info msg="Using inCluster-config based on serviceaccount-token"
time="2021-03-04T12:30:38Z" level=info msg="Created Kubernetes client https://10.245.0.1:443"
time="2021-03-04T12:31:38Z" level=fatal msg="failed to sync cache: timed out waiting for the condition"
What exact condition is it waiting for?
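For context, the condition in that message appears to be the initial sync of external-dns's Kubernetes informer caches: each source (service, ingress, ...) must complete a first LIST/WATCH before the controller starts. A rough way to reproduce the failing call from outside the pod is impersonation (the service-account and namespace names assume the staging release above); the cluster-scoped LIST should fail with a Forbidden error along these lines:
$ kubectl get services --all-namespaces --as=system:serviceaccount:staging:external-dns
Error from server (Forbidden): services is forbidden: User "system:serviceaccount:staging:external-dns" cannot list resource "services" in API group "" at the cluster scope
When that LIST is denied, the informer cache never reports as synced, and the wait (note the 60-second gap in the timestamps above) gives up with exactly that fatal message.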
@carrodher apologies for the ping, but since stalebot will not reopen this, and a comment on a closed issue will very likely go unnoticed, I see no good alternative. Could you please reopen this issue? I'm available to debug further and provide whatever information is needed.
Workaround found and posted in https://github.com/kubernetes-sigs/external-dns/issues/961
Thanks for the information! Do you think there is room for improving the chart itself?
One option would be to always create the node watcher role at cluster level... Or the code needs to be changed to not require it.
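To make that concrete, here is a minimal sketch of such a node-watcher role at cluster level, created by hand alongside the chart's namespaced Role (the object names and the kube-system namespace are assumptions; adjust to your release):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns-node-watcher   # assumed name, not created by the chart
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-node-watcher   # assumed name, not created by the chart
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns-node-watcher
subjects:
  - kind: ServiceAccount
    name: external-dns        # the release's ServiceAccount
    namespace: kube-system    # assumed release namespace
This keeps everything else namespaced and only lifts the cluster-scoped nodes access the service source reportedly needs; whether it alone is sufficient also depends on the namespace setting discussed later in the thread.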
I will create an internal task to investigate this issue, but as the impact is limited and due to other priorities on our backlog, I am not sure if it is going to be addressed in the short term. Would you like to contribute by creating a PR? The team will be happy to review it and incorporate the solution into the next version of the chart.
I can't promise a PR will follow; I have no idea how charts are structured or applied, so it will require some research. Also, I'm pretty new to k8s and thus not sure my solution is a good one. One thing that needs to be researched by the implementers of external-dns is whether access to the nodes resource is really a requirement or whether it could be done without it...
Hi @carrodher
Is there an issue somewhere that you can link to for the investigation of whether the node permissions are actually required? Changes to the template don't make sense if the dependency can be removed instead.
Thanks
In the bitnami/charts repo, there are no open issues related to that at this moment; the only issue I am aware of that can be related to this topic is https://github.com/bitnami/charts/issues/4446.
Hi @SamMousa
I had a look at the code and found that the service source does require node permissions, but the ingress source does not, which matches my use case. However, by default, namespace is set to '', so the service times out as it tries to check ingresses in all namespaces but fails because there is no clusterRole.
I found the following config to work:
sources:
  - ingress
rbac:
  clusterRole: false
namespace: <insert your namespace>
I have raised a PR that should remove the need to set the namespace yourself https://github.com/bitnami/charts/pull/5808
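For anyone landing here later, a sketch of applying that configuration (the values-file name is an assumption):
$ helm install external-dns bitnami/external-dns --namespace staging -f values.yaml
Afterwards, only namespaced RBAC objects should exist for the release:
$ kubectl get role,rolebinding --namespace staging -l app.kubernetes.io/instance=external-dns
$ kubectl get clusterrole external-dns
Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "external-dns" not found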
Would disabling the service source have consequences?

It depends on how you are obtaining your external IPs: via service or ingress objects. See sources and providers for more info. If you use ingress for all incoming traffic, then this works perfectly.
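To illustrate the ingress source with a generic example (names and host are made up, not from this thread): external-dns derives the DNS record from the host field of the Ingress rules, so an object like this in the watched namespace is all it needs:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app               # hypothetical
  namespace: staging      # the namespace the release is restricted to
spec:
  rules:
    - host: app.domain.org   # external-dns creates the record for this host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app     # hypothetical backend Service
                port:
                  number: 80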
Which chart:
helm-chart version: 4.0.0
External-DNS version (use external-dns --version): 0.7.4
Describe the bug
Getting "failed to sync cache: timed out waiting for the condition" while deploying external-dns with the parameter rbac.create=false and a custom namespace. We create multiple namespaces and multiple external-dns instances.
To Reproduce
The easiest way to reproduce:
helm install external-dns bitnami/external-dns -n kube-system --set rbac.clusterRole=False
Expected behavior
external-dns is created with the parameter rbac.create=false in a custom namespace.
Version of Helm and Kubernetes:
helm version:
kubectl version: