kubernetes-sigs / external-dns

Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services
Apache License 2.0

Add DNS entry for Endpoint IP (if not using type loadbalancer) #187

Closed: evaldasou closed this issue 7 years ago

evaldasou commented 7 years ago

Hey Guys,

Thanks for a great tool. However, is it possible to get DNS entries updated with Internal IPs? Or with Endpoints IPs? I do not want to expose service to the internet, so type loadbalancer is not ideal for this.

Thanks!

hjacobs commented 7 years ago

AFAIK this should "just work" for Services and Ingresses as long as the Kubernetes field status.loadBalancer.ingress is properly populated: ExternalDNS only treats hostnames in a special way (no A record possible; it also checks for the AWS ELB hosted zone), and all other IPs are used as-is. Even a local IP like 127.0.0.1 should work (though it would not make sense).
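A quick way to check whether that field is populated on a given Service (the nginx name here is just an example):

$ kubectl get service nginx -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
# empty output means there is nothing for ExternalDNS to pick up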

There is no special check for service type "LoadBalancer" as you can see in https://github.com/kubernetes-incubator/external-dns/blob/master/source/service.go .

Maybe you can describe your use case in more detail? I'm not entirely sure what you want to achieve.

evaldasou commented 7 years ago

Hey @hjacobs , thanks for quick response!

I deploy my service like this :

kubectl run nginx --image=nginx --replicas=1 --port=80
kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer
kubectl annotate service nginx "external-dns.alpha.kubernetes.io/hostname=nginx1.test.net" 

However, this creates a LoadBalancer with an external IP address. I would like to expose the deployment without the --type=LoadBalancer flag ... and get the Endpoint IP populated in my DNS zone. Like this:

kubectl expose deployment nginx --port=80 --target-port=80

kubectl describe service nginx
Name:                   nginx
Namespace:              default
Labels:                 run=nginx
Annotations:            <none>
Selector:               run=nginx
Type:                   ClusterIP
IP:                     10.111.253.237
Port:                   <unset> 80/TCP
**Endpoints:              10.108.2.78:80**

I want this IP : 10.108.2.78 in my DNS zone configuration :)

hjacobs commented 7 years ago

@evaldasou hmm, why do you want to expose the internal endpoint IPs in public DNS? Also, why are you talking about endpoint IPs and not the ClusterIP of the service (10.111.253.237 in your example)? The service might have an "unlimited" number of endpoints; would you expect load balancing on the DNS side for all those IPs (DNS round robin)? FYI: inside the cluster you get a DNS entry for the ClusterIP "out of the box" via kube-dns (you can just do "curl http://nginx/" from some pod).
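A minimal way to try that in-cluster lookup, assuming an image with wget or curl is pullable (the image and names here are just examples):

$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- wget -qO- http://nginx/
# busybox resolves "nginx" via kube-dns to the service's ClusterIP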

I still don't get your use case, maybe you can elaborate...

evaldasou commented 7 years ago

Sure.

So first, the ClusterIP is only reachable from within the cluster... if I could connect to it from outside the cluster, that would be all good! I want to access my resources from an internal-only/VPN network via DNS names.

I agree that endpoint IPs make little sense when there are multiple endpoints, but at least they are reachable from outside the cluster (unlike the ClusterIP).

So I want my services to be reachable only via internal IPs (not via the internet). It could be the cluster IP or the endpoint IP. As far as I know, LoadBalancer does not work with internal IPs, and I also cannot limit access to a LoadBalancer via firewall rules. I'm using Google Cloud Platform, where firewall rules can be configured for instances but not for load balancers.

Thanks!

jrnt30 commented 7 years ago

We actually have exactly the same situation. We run PriTunl via a LoadBalancer service, but we want to expose the other services via the VPN connection, not through an ELB that we have to manage and deal with.

@evaldasou What we are currently in the process of doing is standing up an internal ELB that fronts the nginx-ingress controller and publishing the Services as Ingress objects. This keeps the DNS records internal and never exposes them to the world. We then publish these to an internal DNS hosted zone that is resolvable via the PriTunl VPN connection running in the VPC itself. I see you're on GCE, so I'm not sure if that would help, as I'm not all that familiar with running a service like the nginx-ingress controller on GCE without exposing it publicly.
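Roughly, a Service along these lines is what fronts the controller for us; this is only a sketch, and the names, labels, hostname and the exact value of the internal-ELB annotation (which has varied across Kubernetes versions) are illustrative:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-internal
  annotations:
    # internal (non-internet-facing) ELB on AWS
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
    external-dns.alpha.kubernetes.io/hostname: ingress.internal.example.com
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
EOF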

linki commented 7 years ago

@evaldasou did you have a look at Headless Services? KubeDNS will serve A records for each pod belonging to a headless service.

In your example above this would lead to something like this, I believe:


$ dig @kubednsIP nginx.default.svc.cluster.local
10.108.2.78     <== pod IP
...
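For completeness, a headless Service is just a ClusterIP Service with clusterIP set to None; a minimal sketch matching the nginx example above:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80
EOF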
jrnt30 commented 7 years ago

That works, but it can be handy to have an abstraction over that which is "nicer" for end users: something that remains consistent and abstracts over the different namespaces. Our end users want something like redis.dev.vpn and redis.stg.vpn, but we want the flexibility of potentially deploying stg and dev in the same Kubernetes cluster in different namespaces, or in completely different clusters.

linki commented 7 years ago

I see, that makes sense. I created an issue as well.

evaldasou commented 7 years ago

Thanks a lot guys! Really appreciate your time and effort! Looks promising! :+1:

jrnt30 commented 7 years ago

@linki One thing I guess I should have mentioned explicitly with the Headless Service comment is that external-dns currently doesn't support this, as such a service doesn't have svc.Status.LoadBalancer.Ingress populated.

I started some work to support the ClusterIP service type as well, which I personally think is more useful than relying on the PodSpec IP, but perhaps I'm missing another use case.

I started some work on this @ https://github.com/kubernetes-incubator/external-dns/compare/master...jrnt30:clusterip-sources
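For anyone trying that branch, the intended usage is essentially the original example from this thread minus the LoadBalancer type, e.g.:

$ kubectl expose deployment nginx --port=80 --target-port=80
$ kubectl annotate service nginx "external-dns.alpha.kubernetes.io/hostname=nginx1.test.net"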

evaldasou commented 7 years ago

Hey @jrnt30! It's a great fix that you have added ClusterIP support; I have tested it and it works! However, the ClusterIP is reachable only inside the Kubernetes cluster. Why not add the Endpoint IP too, so we could reach Kubernetes resources directly by DNS name from outside the cluster? :)

jrnt30 commented 7 years ago

I'm glad to see that it's working for you as well. A little context, a few questions, and a direct answer to your question.

Context: We run our VPN directly in the cluster itself and expose the VPN server as a LoadBalancer. When our users VPN in, since the VPN server is sitting in the cluster and we have it configured to "own" that CIDR block and the domain for the associated hosted zone, our users are able to use the external-dns managed entry to resolve and access those "internal" services.

We went this route due to some limitations we saw with the Ingress controller's ability to map arbitrary protocols/ports and a few other things I can't recall immediately.

Questions: I'm unfamiliar with some alternate deployment techniques, but aren't the endpoints you see via kubectl get endpoints <svcname> similarly "private" and unroutable? In my case, these IPs are the in-cluster IPs of the various pods and would be unreachable if not "inside" the Kubernetes cluster itself (as our VPN server is).

Can you provide a bit more information about what you are attempting to expose and what IPs the endpoint vs. service actually exposes?

Answer

evaldasou commented 7 years ago

Thanks @jrnt30. Actually, ClusterIP is all good: I got it working by changing my routing configuration. However, I want to ask, since from my testing --publish-internal-services works only with ClusterIP services: can we make it work with type NodePort too?
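For reference, this is roughly how I'm invoking external-dns now; the provider and domain are from my GCP setup and only illustrative:

$ external-dns --source=service --publish-internal-services \
    --provider=google --domain-filter=test.net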

jrnt30 commented 7 years ago

That will also require multiple-target support, but we could create an issue to cover some of those cases.

evaldasou commented 7 years ago

@jrnt30, NodePort can work with a single target too; for example, it looks like this on my service:

root:evaldas# kubectl describe svc dev-http
Name:           dev-http
Namespace:      default
Labels:         <none>
Annotations:        external-dns.alpha.kubernetes.io/hostname=dev.evaldas.net.
Selector:       app=nifi
Type:           NodePort
**IP:           10.111.240.79**
Port:           nifi-http   80/TCP
NodePort:       nifi-http   31846/TCP
Endpoints:      10.108.3.65:80,10.108.4.41:80
Session Affinity:   None
Events:         <none>

The IP is the same as the ClusterIP and could be exposed in this case.
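For example, the ClusterIP of a NodePort service can be read the same way as for any other service:

$ kubectl get service dev-http -o jsonpath='{.spec.clusterIP}'
10.111.240.79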

nrobert13 commented 6 years ago

@evaldasou, how did you get it to work with Endpoints? I've created a headless ClusterIP service as follows:

$ kubectl -n ingress describe service nginx-ingress-service 
Name:           nginx-ingress-service
Namespace:      ingress
Labels:         k8s-svc=nginx-ingress-service
Annotations:        <none>
Selector:       pod=nginx-ingress-lb
Type:           ClusterIP
IP:         None
Port:           http    80/TCP
Endpoints:      10.68.69.75:80,10.68.74.204:80,10.68.76.75:80 + 2 more...
Port:           https   443/TCP
Endpoints:      10.68.69.75:443,10.68.74.204:443,10.68.76.75:443 + 2 more...
Session Affinity:   None
Events:         <none>

running external-dns with the following flags: --source=service --publish-internal-services --domain-filter=prod.k8s.vcdcc.example.info --provider=infoblox --txt-owner-id=ext-dns-k8s-prod --log-level=debug

but external-dns doesn't find anything to export (I've marked the service in question with stars):

DEBU[0002] No endpoints could be generated from service default/kubernetes 
DEBU[0002] No endpoints could be generated from service ingress/default-http-backend 
**DEBU[0002] No endpoints could be generated from service ingress/nginx-ingress-service** 
DEBU[0002] No endpoints could be generated from service kube-system/heapster 
DEBU[0002] No endpoints could be generated from service kube-system/kube-controller-manager-prometheus-discovery 
DEBU[0002] No endpoints could be generated from service kube-system/kube-dns 
DEBU[0002] No endpoints could be generated from service kube-system/kube-dns-prometheus-discovery 
DEBU[0002] No endpoints could be generated from service kube-system/kube-scheduler-prometheus-discovery 
DEBU[0002] No endpoints could be generated from service kube-system/kubelet 
DEBU[0002] No endpoints could be generated from service kube-system/kubernetes-dashboard 
DEBU[0002] No endpoints could be generated from service kube-system/monitoring-grafana 
DEBU[0002] No endpoints could be generated from service kube-system/monitoring-influxdb 
DEBU[0002] No endpoints could be generated from service monitoring/alertmanager-main 
DEBU[0002] No endpoints could be generated from service monitoring/alertmanager-operated 
DEBU[0002] No endpoints could be generated from service monitoring/grafana 
DEBU[0002] No endpoints could be generated from service monitoring/kube-state-metrics 
DEBU[0002] No endpoints could be generated from service monitoring/node-exporter 
DEBU[0002] No endpoints could be generated from service monitoring/prometheus-k8s 
DEBU[0002] No endpoints could be generated from service monitoring/prometheus-operated 
DEBU[0002] No endpoints could be generated from service monitoring/prometheus-operator 
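For reference, the service above has no annotations yet; applying the hostname annotation used earlier in the thread would look roughly like this (the hostname is just an example under my domain filter):

$ kubectl -n ingress annotate service nginx-ingress-service \
    "external-dns.alpha.kubernetes.io/hostname=nginx-ingress.prod.k8s.vcdcc.example.info."
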
ekoome commented 6 years ago

I would like to update external-dns with a node's PUBLIC IP, as a deployment needs to use host networking and uses the host's external IP. As suggested above, how do I set status.loadBalancer.ingress with the external IP so that it can be picked up by external-dns?

rhangelxs commented 6 years ago

Vote for letting external-dns use the node's public IP (ephemeral or static, in GCE terms).