kubernetes-sigs / external-dns

Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services
Apache License 2.0
7.73k stars · 2.57k forks

Multiple Istio Gateways (--istio-ingress-gateway) not working as expected #884

Closed bradenwright closed 4 years ago

bradenwright commented 5 years ago

I'm trying to get external-dns working with 2x Istio Gateways (I know there are open tickets around that). So I have 2 gateways deployed, the default: istio-system/istio-ingressgateway and a private gateway named: istio-system/istio-gateway-private

I've added the argument --istio-ingress-gateway to my install and I can see it:

$ kubectl describe po -n kube-system dns-external-dns-8565769687-22d44
Name:           dns-external-dns-8565769687-22d44
Namespace:      kube-system
Node:           gke-prod-gke-default-pool-ec5e5cd4-h8rc/10.204.0.30
Start Time:     Thu, 31 Jan 2019 12:14:07 -0600
Labels:         app=external-dns
                heritage=Tiller
                pod-template-hash=4121325243
                release=dns
Annotations:    <none>
Status:         Running
IP:             10.200.8.28
Controlled By:  ReplicaSet/dns-external-dns-8565769687
Containers:
  external-dns:
    Container ID:  docker://b0bd15910098d945a5bf779a6f25661290e1a4658a2baa3f7ca5f455380e292a
    Image:         registry.opensource.zalan.do/teapot/external-dns:v0.5.9
    Image ID:      docker-pullable://registry.opensource.zalan.do/teapot/external-dns@sha256:3238aaf10240d6322c87073766f67d8cf24dc2db4f0ef01b62d18890573f8075
    Port:          7979/TCP
    Host Port:     0/TCP
    Args:
      --log-level=info
      --domain-filter=public.ds24-prod.com
      --policy=upsert-only
      --provider=google
      --registry=txt
      --source=service
      --source=ingress
      --source=istio-gateway
      --istio-ingress-gateway=istio-system/istio-ingressgateway

Even though --istio-ingress-gateway=istio-system/istio-ingressgateway is set, it's still publishing DNS records from my other gateway:

$ kc logs -n kube-system -f dns-external-dns-8565769687-22d44
time="2019-01-31T18:14:08Z" level=info msg="config: {Master: KubeConfig: RequestTimeout:30s IstioIngressGateway:istio-system/istio-ingressgateway Sources:[service ingress istio-gateway] Namespace: AnnotationFilter: FQDNTemplate: CombineFQDNAndAnnotation:false Compatibility: PublishInternal:false PublishHostIP:false ConnectorSourceServer:localhost:8080 Provider:google GoogleProject:ds-prod-226112 DomainFilter:[public.ds24-prod.com] ZoneIDFilter:[] AlibabaCloudConfigFile:/etc/kubernetes/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType: AWSAssumeRole: AWSBatchChangeSize:4000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: CloudflareProxied:true InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml InMemoryZones:[] PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:upsert-only Registry:txt TXTOwnerID:default TXTPrefix: Interval:1m0s Once:false DryRun:false LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s ExoscaleEndpoint:https://api.exoscale.ch/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io/v1alpha CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false}"
time="2019-01-31T18:14:08Z" level=info msg="Created Kubernetes client https://10.204.16.1:443"
time="2019-01-31T18:14:08Z" level=info msg="Created Istio client"
time="2019-01-31T18:14:09Z" level=info msg="All records are already up to date"
time="2019-01-31T18:15:09Z" level=info msg="Change zone: public-ds24-prod-com"
time="2019-01-31T18:15:09Z" level=info msg="Add records: auth.public.ds24-prod.com. A [35.195.181.246] 300"
time="2019-01-31T18:15:09Z" level=info msg="Add records: auth.public.ds24-prod.com. TXT [\"heritage=external-dns,external-dns/owner=default,external-dns/resource=gateway/istio-system/istio-private\"] 300"

IstioIngressGateway:istio-system/istio-ingressgateway is configured, as you can see in the logs, and as the TXT record shows, this record belongs to gateway istio-system/istio-private.

I noticed that PR https://github.com/kubernetes-incubator/external-dns/pull/758 was closed and never merged; maybe it's related. I think resurrecting that PR would help with the open issues around multiple gateways: https://github.com/kubernetes-incubator/external-dns/issues/757 and https://github.com/kubernetes-incubator/external-dns/issues/759

Anyway, I don't mind pitching in if needed, but I really need a resolution for this. If the flag worked, my thought was that I could run two external-dns charts, one for each --istio-ingress-gateway, and that would at least hold me over until support for multiple gateways is ready.

bradenwright commented 5 years ago

Here are my gateways if anyone is interested.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-public
  namespace: {{ .Release.Namespace }}
spec:
  selector:
    istio: {{ .Values.gateway.name }}
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "kiali.public.example.com"
    tls:
      httpsRedirect: false
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
    hosts:
    - "kiali.public.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-private
  namespace: {{ .Release.Namespace }}
spec:
  selector:
    istio: gateway-private
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "auth.public.example.com"
    tls:
      httpsRedirect: false
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
    hosts:
    - "auth.public.example.com"

And I'm running: external-dns Helm chart 1.3.0, Istio Helm chart 1.1.0

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-18T11:37:06Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.12-gke.1", GitCommit:"8c6cac7466d8b36ead34f89822e37eb6e4e011c8", GitTreeState:"clean", BuildDate:"2019-01-15T19:48:39Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
bradenwright commented 5 years ago

From a little more testing, I confirmed that the flag drives which IP gets published as the value of the A record, so it does have some effect. It does, however, discover all Gateway objects and publish their hostnames, regardless of whether they are tied to the specified --istio-ingress-gateway or not.

jyoukhana2165 commented 5 years ago

We are seeing the exact same thing.

julientyro commented 5 years ago

We hit the same issue. We managed to work around the problem by annotating the Gateways and by using --annotation-filter option to select a subset of Gateways.

It would be nice, though, if the default behaviour were to select only the Gateways associated with the targeted ingress gateway.

kish3007 commented 5 years ago

@julientyro can you share more details (YAML) on your workaround using --annotation-filter?

julientyro commented 5 years ago

We've annotated our Gateways to group them (e.g. type=private). Then, if you want a specific instance of external-dns to only load those Gateway configs, add the flag --annotation-filter=type=private to your deployment options.
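Put concretely, the workaround sketched above looks something like this. The Gateway name, selector, and the `type: private` annotation are illustrative, not from anyone's actual manifests; note that external-dns's --annotation-filter matches *annotations* on the Gateway object, as clarified further down the thread:

```yaml
# Gateway to be picked up only by the "private" external-dns instance
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: some-private-gateway        # illustrative name
  annotations:
    type: private                   # matched by --annotation-filter
spec:
  selector:
    istio: gateway-private
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "app.private.example.com"
---
# Matching args on the "private" external-dns deployment:
#   - --source=istio-gateway
#   - --istio-ingress-gateway=istio-system/istio-gateway-private
#   - --annotation-filter=type=private
```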

LorbusChris commented 5 years ago

@bradenwright AFAICT this is a configuration issue; there may be some confusion about the correct use of the --istio-ingress-gateway flag here:

With this flag you specify an ingress gateway, which is a Service of type LoadBalancer. It does not refer to the Gateway configs, which hold your DNS entries.

In the case of your configuration, because --namespace is not set, it will pick up all Gateway configs from all namespaces (obviously including your private gateway config) and attach your configured ingress gateway load balancer Service to them.

To get your desired behavior you could go the way described in the post above (or just label private Gateway configs type=private and start the public external-dns deployment with --annotation-filter=type!=private). Or you could move the private Gateway object to another namespace and pass a --namespace=istio-system flag (with your public Gateway config(s) living in that namespace).

Right now there is no way to specify exactly one Gateway config to use: either one namespace or all namespaces will be scanned for these objects, and if there is more than one, all of them are used.
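For the "public" side of this setup, an exclusion-style filter avoids having to annotate every public Gateway. A sketch of the deployment args (values illustrative; --annotation-filter takes Kubernetes label-selector syntax, and the `notin` form is used here because the `!=` operator is reported later in this thread to be rejected by the parser):

```yaml
# Args for the "public" external-dns instance (illustrative)
- --source=istio-gateway
- --istio-ingress-gateway=istio-system/istio-ingressgateway
- --annotation-filter=type notin (private)
```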

I have a PR over in #907 to support specifying the --istio-ingress-gateway flag multiple times in order to add multiple load balancers to the endpoints.

w32-blaster commented 5 years ago

@LorbusChris: I personally tried using the namespace/service of the ingress gateway implementation, and the behavior is the same as described above by others. All Gateways are discovered, regardless of whether they are attached to that specific ingress gateway or not.

LorbusChris commented 5 years ago

@w32-blaster yes, looking at the code, that is expected. You need to add --namespace=somenamespace in order to exclude Gateways from other namespaces.

w32-blaster commented 5 years ago

That's strange behavior; I would not expect this. Does this mean that I should not have any Gateway definitions in that namespace, or else external-dns will discover them?

LorbusChris commented 5 years ago

Yes. You can narrow down which Gateways are picked up by specifying one namespace. Otherwise (i.e. when not specifying a namespace), all Gateways from all namespaces are picked up.

I agree that this is not intuitive. I think the --namespace flag should not default to all namespaces, and should be specifiable multiple times (like what I did for --istio-ingress-gateway flag in #907 ).

I have not used external-dns in the context of other sources, but I'd assume this behavior is the same for them? Can someone clarify? The Kubernetes API is not queryable for multiple namespaces at once (you either specify one namespace or search across all namespaces), see https://github.com/kubernetes/kubernetes/issues/71032

luizferreira094 commented 5 years ago

Hi,

I added the labels type=internal and type=external to my external-dns deployments and my Istio deployment as well, but I still can't get this to work correctly. Can you guys help me?

Here's my external-dns external deployment args

spec:
  containers:
  - args:
    - --source=istio-gateway
    - --source=service
    - --source=ingress
    - --annotation-filter=type=external
    - --istio-ingress-gateway=istio-system/istio-ingressgateway

Here's my external-dns internal deployment args

spec:
  containers:
  - args:
    - --source=istio-gateway
    - --source=service
    - --source=ingress
    - --annotation-filter=type=internal
    - --istio-ingress-gateway=istio-system/istio-internalgateway

And here are my gateways config:

External Ingress Gateway

  labels:
    app: istio-ingressgateway
    istio: ingressgateway
    pod-template-hash: "317985942"
    type: external

Internal Ingress Gateway

    app: istio-internalgateway
    istio: internalgateway
    pod-template-hash: "317985942"
    type: internal

Both gateways are running in the same namespace, istio-system. When I create a Gateway rule for my application with the DNS name, both external-dns instances do nothing; the logs of both just show:

time="2019-03-08T19:14:27Z" level=info msg="All records are already up to date"
time="2019-03-08T19:15:28Z" level=info msg="All records are already up to date"
time="2019-03-08T19:16:28Z" level=info msg="All records are already up to date"
time="2019-03-08T19:17:27Z" level=info msg="All records are already up to date"
time="2019-03-08T19:18:27Z" level=info msg="All records are already up to date"

For reference, here's the dummy Gateway rule that I'm trying to create:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  annotations:
  name: test-istio-internal
  namespace: default
spec:
  selector:
    istio: internalgateway
    type: internal
  servers:
  - hosts:
    - test-internal.example.com
    port:
      name: http
      number: 80
      protocol: HTTP
LorbusChris commented 5 years ago

@luizferreira094 can you try using an annotation instead of a label on your Gateway object? Also, I don't think labeling the IngressGateway objects is necessary.

luizferreira094 commented 5 years ago

@LorbusChris that's it... the annotation-filter from external-dns refers to the application Gateway's annotations. Now it's working, thanks! For reference, this is how the Gateway looks now:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  annotations:
    istio-type: internal
  name: test-istio-internal
  namespace: default
spec:
  selector:
    istio: internalgateway
  servers:
  - hosts:
    - test-internal.example.com
    port:
      name: http
      number: 80
      protocol: HTTP
LorbusChris commented 5 years ago

@bradenwright It seems this issue has been solved. Close?

bradenwright commented 5 years ago

@LorbusChris I think this is still an issue. There is a workaround, but it involves deploying multiple copies of the external-dns chart, and the behavior deviates from how external-dns works elsewhere: when I used a single install with multiple ingresses, it published the proper IP for the proper hostname. Also, I think I saw that the --namespace flag got removed, since it didn't really make sense now that multiple gateways are supported.

Now that multiple Istio gateways are supported, I added two gateways:

Containers:
  external-dns:
    Container ID:  docker://49e1f0db4316267a5ff8bcd363359c160b8aee942e680a48cea55cb5ad3c0d09
    Image:         registry.opensource.zalan.do/teapot/external-dns:v0.5.12
    Image ID:      docker-pullable://registry.opensource.zalan.do/teapot/external-dns@sha256:f395ba72e53d9c1e8851461af82da3bd215240bacddde3e7843058dc5cea76c2
    Port:          7979/TCP
    Host Port:     0/TCP
    Args:
      --log-level=info
      --policy=upsert-only
      --provider=google
      --registry=txt
      --interval=1m
      --txt-owner-id=sandbox
      --source=service
      --source=ingress
      --source=istio-gateway
      --istio-ingress-gateway=istio-system/istio-gateway-private
      --istio-ingress-gateway=istio-system/istio-gateway-public

Let's make up some IPs for the gateways' Services (type LoadBalancer): istio-gateway-private: 10.0.0.1, istio-gateway-public: 35.0.0.1.

I have two DNS entries, one on each gateway; let's call them private.example.com and public.example.com.

Both hostnames get both targets, so:

private.example.com  A  10.0.0.1, 35.0.0.1
public.example.com   A  10.0.0.1, 35.0.0.1

I don't think this is what anyone would want or expect, so for now I still need to use the annotation workaround (I think I forgot to say thanks, @julientyro) and currently have two deployments of the dns chart. I'm pretty sure I'll need to deploy a third gateway, which will make it three copies of the dns chart.

Anyway, my two cents is that this should still be addressed.

LorbusChris commented 5 years ago

@bradenwright The --namespace flag has not been removed, AFAICT. Namespace and Istio ingress gateway(s) are orthogonal configurations here. I agree with you that the behaviour does not feel intuitive; however, the flag is being respected, so maybe change the title of this issue?

bradenwright commented 5 years ago

@LorbusChris actually, what I was thinking of with namespace is istioNamespace, and it was your comment :)

https://github.com/kubernetes-incubator/external-dns/pull/907#issuecomment-473313973

Anyway, I updated the title. It's wordy because I didn't really know how to describe it concisely; open to suggestions.

pawelprazak commented 5 years ago

"Multiple istio ingress gateway services not working as expected" - title suggestion

I can confirm the issue; the expected behaviour is to map the Service load balancer address to the Istio gateway instance actually used.

nesl247 commented 5 years ago

Running into this as well. We have the default (public) and a secondary (private) Istio gateway. When external-dns creates its DNS entry, it includes both the default and secondary Istio gateway IPs. It should only include the IPs from whichever gateways are specified on the Gateway resources via the spec.selector object.

crhuber commented 5 years ago

@LorbusChris

We are also seeing the same behaviour, but for a different use case. We have multiple gateway Services, i.e. istio-ingressgateway-a and istio-ingressgateway-b, and would like the DNS records to point to the appropriate gateway Service as specified in the Gateway resource's spec.selector.

We have applied the following args

-  --istio-ingress-gateway=istio-system/istio-ingressgateway-a
-  --istio-ingress-gateway=istio-system/istio-ingressgateway-b

We have multiple Gateway objects

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo
  namespace: foo
spec:
  selector:
    istio: istio-ingressgateway-a
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bar
  namespace: bar
spec:
  selector:
    istio: istio-ingressgateway-b

At this point we are seeing the DNS record for Gateway bar pointing to the gateway controller/Service istio-ingressgateway-a, or vice versa. The records seem to be created against the multiple gateway controllers at random.

LorbusChris commented 5 years ago

@crhuber the way it currently works is that all ingress gateways specified via the flags are added to the DNS entries created for all the sources this external-dns instance manages. So in order to make it work, you'll have to start one external-dns instance per ingress gateway and filter the Gateways for each of them with annotation filters, as shown somewhere above.

LorbusChris commented 5 years ago

What @nesl247 proposes above sounds reasonable to me. Instead of passing a flag for each ingress gateway load balancer, we should really just inspect the Gateway object and add all the ingress gateways specified in its spec.selector.
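The matching step proposed above can be sketched as a small pure function: given a Gateway's spec.selector and the labels on each candidate ingress gateway workload, keep only the workloads whose labels contain every selector pair. This is a minimal illustration with hypothetical names, not external-dns's actual implementation:

```go
package main

import "fmt"

// selectorMatches reports whether every key/value pair in selector
// is present in labels — the equality-based subset test Kubernetes
// label selectors use.
func selectorMatches(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	// Selector taken from a hypothetical Gateway's spec.selector.
	gatewaySelector := map[string]string{"istio": "ingressgateway-a"}

	// Labels of two candidate ingress gateway workloads (hypothetical).
	workloads := map[string]map[string]string{
		"istio-ingressgateway-a": {"app": "istio-ingressgateway", "istio": "ingressgateway-a"},
		"istio-ingressgateway-b": {"app": "istio-ingressgateway", "istio": "ingressgateway-b"},
	}

	for name, labels := range workloads {
		if selectorMatches(gatewaySelector, labels) {
			// Only this workload's load balancer IP would be published.
			fmt.Println("Gateway targets:", name)
		}
	}
	// prints: Gateway targets: istio-ingressgateway-a
}
```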

crhuber commented 5 years ago

@LorbusChris if we are all in agreement, I can attempt a PR to address what @nesl247 proposes, which will inspect the Gateway object and add all the ingress gateways specified in its spec.selector.

pmcfadden commented 5 years ago

Is anyone else getting level=error msg="\"!=\" is not a valid label selector operator" when specifying an annotation filter like --annotation-filter=type!=private?

nesl247 commented 5 years ago

Yep. I had to use notin (values, here) instead, which I really don't like.

thiago commented 5 years ago

I see the same behaviour reported in https://github.com/kubernetes-incubator/external-dns/issues/884#issuecomment-509201384. I have two ingress gateways with two Gateways, each Gateway pointing to one ingress gateway, yet external-dns creates DNS records pointing at the same LB.

thiago commented 5 years ago

It worked for me to create two external-dns instances with --annotation-filter, like:

Private external-dns

- --istio-ingress-gateway=istio-system/istio-ingressgateway
- --annotation-filter=type notin (public)

Private gateway (without annotation because I'm filtering with type notin (public))

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: private-test
spec:
  selector:
    istio: ingressgateway

Public external-dns

- --istio-ingress-gateway=istio-system/istio-ingressgateway-public
- --annotation-filter=type=public

Public gateway (with annotation type: public)

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-test
  annotations:
    type: public
spec:
  selector:
    istio: ingressgateway-public
fernandocarletti commented 5 years ago

I'm not sure if GitHub notifies when an issue is referenced, but I opened a PR earlier today (#1154) that does exactly what @crhuber described here.

chr15murray commented 5 years ago

I've been having the same issues described above, with external-dns and multiple Istio ingress gateways producing A records that point at the wrong gateway.

There are already a couple of workarounds above but I've found another which works for me and may be useful for others. It only requires a single deployment of external-dns.

TL;DR: let the external-dns.alpha.kubernetes.io/hostname annotation on each ingress gateway Service create the A records, and create the CNAMEs yourself via the CRD source.

In a bit more detail:

Run external-dns with the following args.

...
 - args:
    ....
    - --registry=txt
    - --txt-prefix=cname_
    - --crd-source-apiversion=externaldns.k8s.io/v1alpha1
    - --crd-source-kind=DNSEndpoint
    - --source=service
    - --source=crd
...

We use Helm to deploy Istio (doesn't everyone?), and on our ingress gateway definition we add the following to create the Service annotations:

  istio-customig:
    enabled: true
    type: LoadBalancer
    serviceAnnotations:
      external-dns.alpha.kubernetes.io/hostname: <your A record hostname>
...

Then create your CNAME as you deploy your service...

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: {{ $key }}
spec:
  endpoints:
  - dnsName: <your CNAME record>
    recordTTL: 180
    recordType: CNAME
    targets:
    - <your A record hostname>

This seems to work well. We are using a Helm chart for the external-dns deployment, so we actually set these settings via:

...
sources:
- service
- ingress
- crd
rbac:
  create: true
metrics: 
  enabled: true
crd:
  create: true
  apiversion: externaldns.k8s.io/v1alpha1
  kind: DNSEndpoint
txtPrefix: cname_

If you don't use the chart, ensure your RBAC is updated correctly. This is also a useful reference: https://github.com/kubernetes-incubator/external-dns/blob/master/docs/contributing/crd-source.md

Hope this is useful

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

LorbusChris commented 4 years ago

I believe this was fixed by https://github.com/kubernetes-sigs/external-dns/pull/1328 /close

k8s-ci-robot commented 4 years ago

@LorbusChris: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/external-dns/issues/884#issuecomment-575194017): >I believe this was fixed by https://github.com/kubernetes-sigs/external-dns/pull/1328 >/close