Closed chrisz100 closed 5 years ago
/assign @SantoDE
That is interesting. I even tried default installs without issues. I'll have a look tomorrow and see what I can pull up.
Second note:
I gave it another test and I cannot reproduce the issue.
```
$ helm install stable/external-dns
NAME: lumpy-seal
LAST DEPLOYED: Sun Jun 23 23:07:45 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                                      READY  STATUS             RESTARTS  AGE
lumpy-seal-external-dns-7688c48dc8-xl8ls  0/1    ContainerCreating  0         1s

==> v1/Service
NAME                     TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
lumpy-seal-external-dns  ClusterIP  10.103.66.112  <none>       7979/TCP  1s

==> v1beta1/Deployment
NAME                     DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
lumpy-seal-external-dns  1        1        1           0          1s
```

```
$ kubectl logs lumpy-seal-external-dns-7688c48dc8-xl8ls
time="2019-06-23T21:07:47Z" level=info msg="config: {Master: KubeConfig: RequestTimeout:30s IstioIngressGatewayServices:[istio-system/istio-ingressgateway] Sources:[service ingress] Namespace: AnnotationFilter: FQDNTemplate: CombineFQDNAndAnnotation:false IgnoreHostnameAnnotation:false Compatibility: PublishInternal:false PublishHostIP:false ConnectorSourceServer:localhost:8080 Provider:aws GoogleProject: DomainFilter:[] ZoneIDFilter:[] AlibabaCloudConfigFile:/etc/kubernetes/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType: AWSZoneTagFilter:[] AWSAssumeRole: AWSBatchChangeSize:1000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true AWSAPIRetries:3 AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: CloudflareProxied:false CloudflareZonesPerPage:50 RcodezeroTXTEncrypt:false InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true InfobloxView: DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml InMemoryZones:[] PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:upsert-only Registry:txt TXTOwnerID:default TXTPrefix: Interval:1m0s Once:false DryRun:false LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s ExoscaleEndpoint:https://api.exoscale.ch/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io/v1alpha1 CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] CFAPIEndpoint: CFUsername: CFPassword: RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false NS1Endpoint: NS1IgnoreSSL:false TransIPAccountName: TransIPPrivateKeyFile:}"
time="2019-06-23T21:07:47Z" level=info msg="Created Kubernetes client https://10.96.0.1:443"
```

```
$ kubectl get crd
No resources found.
```
Can you try supplying `--set rbac.create=true`?
That's the only thing I did differently.
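For reference, the same flag expressed as a values file (nothing assumed beyond the `rbac.create` value mentioned above) would be:

```yaml
# values.yaml fragment: enable creation of the ServiceAccount,
# ClusterRole and ClusterRoleBinding shipped with the chart.
rbac:
  create: true
```

Passed either via `-f values.yaml` or inline with `--set rbac.create=true`.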
Ah, I had some old charts cached. After refreshing, I can see that it creates the CRD by default. As I thought, it's fine; I don't get any errors.
However, I agree a `crd.create` switch would be a good idea. What do you think, should the CRD be a default source? I still tend to say yes, as it's in line with the current status. I'm eager to hear other ideas though :)
I don't really mind creating the CRDs by default; what would be required is an extended set of RBAC rules that gets deployed with `rbac.create=true`.
The reason I even tried deactivating this is simply that it stopped working at all, because it could not do one of the following things:
Fixing issue number one would probably be the most efficient.
The same error here. I fixed it by adding the following to the `external-dns` ClusterRole:

```yaml
- apiGroups:
  - externaldns.k8s.io
  resources:
  - dnsendpoints
  verbs:
  - get
  - list
  - watch
```
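For context, a complete ClusterRole carrying that rule might look like the sketch below. The metadata name and the other rule entries are assumptions modelled on typical external-dns RBAC output, not copied from the chart:

```yaml
# Sketch of a ClusterRole including the dnsendpoints rule.
# Only the externaldns.k8s.io rule is from this thread; the rest is assumed.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
# Core resources external-dns watches for the "service" source.
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "watch", "list"]
# Ingresses for the "ingress" source.
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]
# The rule this thread adds: access to the DNSEndpoint custom resources.
- apiGroups: ["externaldns.k8s.io"]
  resources: ["dnsendpoints"]
  verbs: ["get", "list", "watch"]
```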
Hey @nirdothan, @chrisz100,
I just raised a PR to fix that issue. Happy to see your test!
@SantoDE just did the same thing... I'll delete mine! Looks good!
Haha, I nearly had a PR in for this too. Good to see many eyes on it. :)
Sorry to say, but it looks like this PR did not fix it properly; the issue persists:

```
time="2019-06-26T14:40:01Z" level=error msg="dnsendpoints.externaldns.k8s.io is forbidden: User \"system:serviceaccount:cluster-externaldns:ed-external-dns\" cannot list resource \"dnsendpoints\" in API group \"externaldns.k8s.io\" at the cluster scope"
```

It looks like the wrong apiGroup is now being referenced in the ClusterRole: you're incorrectly referencing `apiextensions.k8s.io/v1beta1` (which, by the way, also shouldn't be versioned), whereas the correct apiGroup that has the `dnsendpoints` CRD is `externaldns.k8s.io` (as can be seen in the error).
@chrisz100 or @SantoDE, can you please re-submit the PR with the correct apiGroup referenced?
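To make the requested change concrete, the rule would need to reference the CRD's own API group rather than `apiextensions.k8s.io`. A sketch of the before/after (the exact layout of the chart's ClusterRole template may differ):

```yaml
# Wrong (what the PR apparently shipped): dnsendpoints resources do not
# live in the apiextensions.k8s.io group, so this grants nothing useful.
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["dnsendpoints"]
  verbs: ["get", "list", "watch"]

# Correct: the API group named in the error message above.
- apiGroups: ["externaldns.k8s.io"]
  resources: ["dnsendpoints"]
  verbs: ["get", "list", "watch"]
```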
I’ll look into it later today
/reopen /assign
Oh sorry! Good that you're taking over, @chrisz100. Let me know if I should jump in.
Done, would be great if you could have a look @SantoDE
Done. :) Thanks
**Describe the bug**
It is impossible to deploy the current external-dns chart, as it always crashes.
It seems the latest change (https://github.com/helm/charts/pull/14961) starts installing the CRD and automatically adds crd as a source as well, even if I specify

```yaml
crd: false
```

which doesn't affect the deployment listing crd as a possible source. Also, creating the CRD fails, as the RBAC extensions for the CRD are not shipped.
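The configuration being attempted would look roughly like the sketch below. The `sources` value name is an assumption about the chart's interface at the time; only `crd: false` and `rbac.create` are taken from this thread:

```yaml
# Hypothetical values.yaml for the failing install.
rbac:
  create: true
sources:
  - service
  - ingress   # "crd" deliberately omitted from the source list
crd: false    # the override from this report; it did not take effect
```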
**Chart version**: 1.8.0
**Which chart**: stable/external-dns
**How to reproduce it** (as minimally and precisely as possible): just a regular `helm install stable/external-dns`