ori-edge / k8s_gateway

A CoreDNS plugin to resolve all types of external Kubernetes resources
Apache License 2.0

Question: Can it be used for CI/CD? #31

Open survivant opened 3 years ago

survivant commented 3 years ago

Here's my use case: I want to deploy my applications across multiple namespaces.

My services could look like this:

chuck-service:8080
quote-service:8080

I'm on-premises with nginx-ingress and MetalLB as the load balancer.

I'll expose the nginx controller as a DaemonSet with an external IP: 10.1.10.123.

For the Ingress:

/chuck -> chuck-service:8080
/quote -> quote-service:8080

I want those applications to be accessible from outside on 10.1.10.123 (I can't expose a new IP).

The domain name (inside-my-company.com) is not registered in external DNS.

e.g.:

dev.inside-my-company.com/chuck
dev.inside-my-company.com/quote
qa.inside-my-company.com/chuck
qa.inside-my-company.com/quote

....

networkop commented 3 years ago

Yeah, that should work out of the box. As soon as you define an Ingress, this plugin will return the IP of your ingress controller for every unique hostname in your Ingresses.
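
For example (a quick sketch; <k8s_gateway-IP> is a placeholder for whatever LoadBalancer IP the k8s_gateway service gets, and the hostname/IPs come from the use case above):

# query the k8s_gateway service directly
dig +short dev.inside-my-company.com @<k8s_gateway-IP>
# expected answer: 10.1.10.123 (the external IP of the nginx ingress controller)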

survivant commented 3 years ago

How can I do that? For now I only have one version, and it's in the default namespace, like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: chuck
  annotations:
    # this is important: https://docs.konghq.com/kubernetes-ingress-controller/1.3.x/references/annotations/#konghqcomstrip-path
    konghq.com/strip-path: "true"
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /quote
            pathType: Prefix
            backend:
              service:
                name: reactive-quote-service
                port:
                  number: 8080
          - path: /chuck
            pathType: Prefix
            backend:
              service:
                name: chuck-quote-service
                port:
                  number: 8080

Here's the list of my services. Kong is my ingress controller:

vagrant@enroute-master:~$ kubectl get svc --all-namespaces
NAMESPACE     NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
default       chuck-quote-service       ClusterIP      10.110.149.200   <none>           8080/TCP                     12h
default       exdns-k8s-gateway         LoadBalancer   10.109.67.179    192.168.50.202   53:30186/UDP                 11h
default       kubernetes                ClusterIP      10.96.0.1        <none>           443/TCP                      13h
default       reactive-quote-service    ClusterIP      10.108.148.54    <none>           80/TCP                       12h
kong          kong-proxy                LoadBalancer   10.103.175.232   192.168.50.200   80:30092/TCP,443:31098/TCP   12h
kong          kong-validation-webhook   ClusterIP      10.110.25.248    <none>           443/TCP                      12h
kube-system   exdns-2-k8s-gateway       LoadBalancer   10.105.96.51     192.168.50.203   53:30389/UDP                 11h
kube-system   ext-dns-tcp               LoadBalancer   10.111.101.102   192.168.50.201   53:32759/TCP                 11h
kube-system   ext-dns-udp               LoadBalancer   10.110.14.237    192.168.50.201   53:31119/UDP                 11h
kube-system   kube-dns                  ClusterIP      10.96.0.10       <none>           53/UDP,53/TCP,9153/TCP       13h
test          chuck-quote-service       ClusterIP      10.109.213.53    <none>           8080/TCP                     12h
test          reactive-quote-service    ClusterIP      10.106.129.43    <none>           80/TCP                       12h
vagrant@enroute-master:~$

I have two applications:

    chuck-quote-service
    reactive-quote-service

For those two applications, I want to deploy them in dev, qa, ... namespaces and modify the ingress rules accordingly. And I need to access those applications from outside my cluster, like:

http://dev.example.org/chuck
http://qa.example.org/chuck

I'm looking to reproduce that setup on bare metal with Kubernetes 1.20 configured with kubeadm.

survivant commented 3 years ago

I tried this, but it didn't work:

vagrant@enroute-master:~$ kubectl -n kube-system get cm coredns -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        k8s_external k8s.home.mydomain.com
        k8s_gateway example.org

        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2021-06-18T22:44:15Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "10708"
  uid: 66bfaf2a-5288-495f-b036-58f271efda35

and I get this:

vagrant@enroute-master:~$ curl http://dev.example.org/chuck
curl: (6) Could not resolve host: dev.example.org

networkop commented 3 years ago

First of all, the domain you specify in the k8s_gateway configuration MUST match the subdomain of your Ingresses. You can have multiple domains if you wish, but at least one of them must match. For example, you can have a ConfigMap with k8s_gateway k8s.home.mydomain.com k8s.work.mydomain.com; then in your Ingress spec you must have either spec.rules[0].host: "foo.k8s.home.mydomain.com" or spec.rules[0].host: "foo.k8s.work.mydomain.com". As a side note, always make sure you explicitly set the host in your Ingress spec; don't leave it empty.
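
For illustration, a minimal sketch of an Ingress with the host set explicitly (the host is one of the hypothetical domains above; the service is the chuck-quote-service from earlier in this thread):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: chuck
spec:
  ingressClassName: kong
  rules:
    - host: foo.k8s.home.mydomain.com   # must fall under a k8s_gateway domain
      http:
        paths:
          - path: /chuck
            pathType: Prefix
            backend:
              service:
                name: chuck-quote-service
                port:
                  number: 8080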

Second, you must make sure your domain has been delegated correctly. What you're trying to do with example.org will never work, since you don't own this domain (unless you modify your DNS resolver). Basically, you need to make sure that a DNS query ends up hitting 192.168.50.202.

The right way to test would be to:

  1. dig foo.k8s.work.mydomain.com @192.168.50.202 -> make sure it returns the IP of your ingress controller
  2. curl foo.k8s.work.mydomain.com

survivant commented 3 years ago

Thanks. The final setup will be in a closed network, with no internet. So to make it work, I need to create an entry in /etc/hosts on each local computer (example.org -> 192.168.50.200),

or add that entry to our company DNS server.

  1. I should try with /etc/hosts locally (Vagrant is perfect for that), as in the sketch below,
  2. when it works locally, find a way to persuade the IT network team to add those entries.
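
Something like this on each client (the IP is the kong-proxy address from above; the dev/qa subdomains are my assumption):

# /etc/hosts on a client machine
192.168.50.200  dev.example.org
192.168.50.200  qa.example.org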

Is it possible to set up a private DNS server in Kubernetes that I could use in Vagrant to simulate that it works? If you have the name of an open-source DNS server that could do that, just let me know.

And thanks again for your help and time.

networkop commented 3 years ago

Yes, you can use standard CoreDNS for that. For example, you can use the file plugin to configure static entries, including any zone delegation: https://coredns.io/plugins/file/
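
A minimal sketch of that (the file path, zone contents, and IPs are assumptions based on this thread, not a tested config):

# Corefile: serve a static zone from disk, forward everything else
example.org:53 {
    file /etc/coredns/db.example.org
    errors
    log
}
.:53 {
    forward . /etc/resolv.conf
}

# /etc/coredns/db.example.org
$ORIGIN example.org.
$TTL 1h
@     IN  SOA  ns.example.org. admin.example.org. (1 1h 15m 1w 1h)
@     IN  NS   ns.example.org.
ns    IN  A    192.168.50.201
dev   IN  A    192.168.50.200
qa    IN  A    192.168.50.200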

survivant commented 3 years ago

@networkop I made a lot of progress this weekend.

I started from scratch.

I found a section in the Kubernetes docs about "virtual hosts". We need to pass the "Host" header.

Here are my Ingresses in the dev namespace:

root@test-pcl4014:~# kubectl -n dev get ingress
NAME               CLASS    HOSTS                       ADDRESS      PORTS   AGE
gateway            <none>   dev.kubernetes.comact.com   10.1.34.55   80      176m
production-wui     <none>   dev.kubernetes.comact.com   10.1.34.55   80      174m
twin-api-service   <none>   dev.kubernetes.comact.com   10.1.34.55   80      13m
root@test-pcl4014:~#

If I want to call the gateway endpoint, I have to do this:

curl -I -H 'Host: dev.kubernetes.comact.com' http://10.1.34.55/gateway

My last problem is HOW to access the UI. When I use only one namespace and no host, it's simple:

http://10.1.34.55/ui 

but now the UI could be deployed in QA, DEV, staging... I need to find out how to pass the header when I try to access the UI. Maybe I could have a different Ingress for the UI and put the prefix in the URL, like:

http://10.1.34.55/dev/ui 
http://10.1.34.55/qa/ui 

networkop commented 3 years ago

The right header is passed automatically when the hostname is in your URL. So curl http://dev.kubernetes.com would create the Host: dev.kubernetes.com header. You can certainly have a different path for each environment as in your last example, but this is not the best way to do it. Ideally, you'd have a different host for each Ingress, so that the output would look something like this:

root@test-pcl4014:~# kubectl -n dev get ingress
NAME               CLASS    HOSTS                        ADDRESS      PORTS   AGE
gateway            <none>   gw.kubernetes.comact.com     10.1.34.55   80      176m
production-wui     <none>   prod.kubernetes.comact.com   10.1.34.55   80      174m
twin-api-service   <none>   api.kubernetes.comact.com    10.1.34.55   80      13m
root@test-pcl4014:~#

Assuming you've got DNS zone delegation set up for kubernetes.comact.com and point it at the k8s_gateway IP, you should be able to do curl gw.kubernetes.comact.com and get to the right backend.
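
As a side note, while the delegation isn't in place yet, you can test the same thing without touching /etc/hosts by using curl's --resolve option (a standard curl flag; hostname and IP taken from the output above):

# connect to 10.1.34.55 while still sending Host: gw.kubernetes.comact.com
curl --resolve gw.kubernetes.comact.com:80:10.1.34.55 http://gw.kubernetes.comact.com/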

survivant commented 3 years ago

The reason they all have the same hostname is because it will look like this:

root@test-pcl4014:~# kubectl -n dev get ingress
NAME               CLASS    HOSTS                        ADDRESS      PORTS   AGE
gateway            <none>   dev.kubernetes.comact.com    10.1.34.55   80      176m
production-wui     <none>   dev.kubernetes.comact.com    10.1.34.55   80      174m
twin-api-service   <none>   dev.kubernetes.comact.com    10.1.34.55   80      13m
root@test-pcl4014:~#

root@test-pcl4014:~# kubectl -n qa get ingress
NAME               CLASS    HOSTS                        ADDRESS      PORTS   AGE
gateway            <none>   qa.kubernetes.comact.com     10.1.34.55   80      176m
production-wui     <none>   qa.kubernetes.comact.com     10.1.34.55   80      174m
twin-api-service   <none>   qa.kubernetes.comact.com     10.1.34.55   80      13m
root@test-pcl4014:~#

root@test-pcl4014:~# kubectl -n prod get ingress
NAME               CLASS    HOSTS                        ADDRESS      PORTS   AGE
gateway            <none>   prod.kubernetes.comact.com   10.1.34.55   80      176m
production-wui     <none>   prod.kubernetes.comact.com   10.1.34.55   80      174m
twin-api-service   <none>   prod.kubernetes.comact.com   10.1.34.55   80      13m
root@test-pcl4014:~#

I don't have a DNS server for now, and I don't want to play with the hosts file on Windows on each computer. I think it could work like that for a "DEV" setup; for production, I'll have to come back and check how to set up the DNS zone delegation.

For now, I'll add this entry to my /etc/hosts:

10.1.34.55 dev.kubernetes.comact.com

and test from a browser, and try with the command line too:

 curl -H 'Host: dev.kubernetes.comact.com' http://10.1.34.55/twin-api-service/swagger-ui.html

Thanks for your help. I hope this discussion will help others.

survivant commented 3 years ago

I think my next step is to install a DNS server and automatically push the new domain names into it.

I'm on Ubuntu 20.04, if you have any suggestions.

Is there a tutorial I could follow?

I'm willing to help write one, but it's my first time playing with DNS like this.

survivant commented 3 years ago

I followed this guide: https://www.linuxtechi.com/install-configure-bind-9-dns-server-ubuntu-debian/ and this one: https://www.linuxbabe.com/ubuntu/set-up-local-dns-resolver-ubuntu-20-04-bind9

From the first tutorial, I replaced linuxtechi.local with cluster114.local.

Node name: node114; my node IP is 10.1.34.14; my load balancer is 10.1.34.55.

On another node, I added the DNS server 10.1.34.14 via netplan,

and I'm able to reach my domain name.

From node4:

curl http://dev.cluster114.local

and I received a response (the host matched in the Ingress).

Now, if I add a new Ingress for a new host, like qa.cluster114.local,

can that information be forwarded to the BIND9 DNS server automatically?

networkop commented 3 years ago

For any dynamic behaviour you need to delegate to k8s_gateway. By default a DNS server has a static configuration that's not supposed to change much. So let's assume you want to delegate cluster114.local to your k8s cluster. First, you'd need to deploy k8s_gateway and get the IP assigned to it by a LB, e.g. 10.1.34.55. Then your BIND zone file would look like this (I haven't actually tested this, so there may be errors):

...
$ORIGIN cluster114.local.
$TTL 1D
@       IN  NS  k8s-ns1.cluster114.local.
k8s-ns1 IN  A   10.1.34.55  ; glue record

Once you have the domain delegation set up, k8s_gateway will do the rest for you. It will resolve any domain under cluster114.local, e.g. qa.cluster114.local or dev.cluster114.local, based on the current state of IPs assigned to those Ingresses in your cluster.
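
A quick way to verify each hop once this is set up (a sketch reusing the example IPs above: 10.1.34.14 for BIND, 10.1.34.55 for the k8s_gateway LB):

# 1. ask k8s_gateway directly -- should return the Ingress address
dig +short dev.cluster114.local @10.1.34.55
# 2. ask BIND -- should follow the delegation and return the same answer
dig +short dev.cluster114.local @10.1.34.14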

survivant commented 3 years ago

Here is the procedure I tried to apply to make k8s_gateway handle the domain names. It doesn't work so far.

I copied the cluster114.local configuration to cluster111.local and tried to redirect cluster111.local -> the k8s_gateway IP.

I installed k8s_gateway

helm install exdns --set domain=cluster111.local k8s_gateway/k8s-gateway

Here is the list of my load balancers:

NAMESPACE       NAME                                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
default         exdns-k8s-gateway                          LoadBalancer   10.100.101.189   10.1.34.56    53:31281/UDP                          25m
nginx-ingress   ingress-nginx-controller                   LoadBalancer   10.99.95.158     10.1.34.55    80:31224/TCP,443:31751/TCP            9d

Here are my two Ingresses:

NAMESPACE   NAME                CLASS    HOSTS                  ADDRESS      PORTS   AGE
dev         production-wui      <none>   dev.cluster114.local   10.1.34.55   80      43h
qa          production-wui      <none>   qa.cluster111.local    10.1.34.55   80      69m

Here is my Ingress file for production-wui in the qa namespace:

root@test-pcl4014:/etc/bind# kubectl -n qa get ingress production-wui -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: production-wui
    meta.helm.sh/release-namespace: qa
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  creationTimestamp: "2021-07-01T13:28:56Z"
  generation: 2
  labels:
    app.kubernetes.io/managed-by: Helm
  name: production-wui
  namespace: qa
  resourceVersion: "17559724"
  uid: 2aa3b241-1499-4d5b-b0ae-f6908fc84b40
spec:
  rules:
  - host: qa.cluster111.local
    http:
      paths:
      - backend:
          service:
            name: production-wui
            port:
              number: 80
        path: /production(/|$)(.*)
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - ip: 10.1.34.55

If I try to connect to the applications (dev.cluster114.local works):

root@test-pcl4014:/etc/bind# curl http://dev.cluster114.local/production
<!doctype html><html><head><meta charset="utf-8"><script>window.publicPath = "/" + window.location.pathname.split("/")[1] + "/";

root@test-pcl4014:/etc/bind# !curl
curl http://qa.cluster111.local/production
curl: (6) Could not resolve host: qa.cluster111.local
root@test-pcl4014:/etc/bind#

My node information

root@test-pcl4014:/etc/bind# hostname -I
10.1.34.14 192.168.178.64

systemd-resolve --status
Global
       LLMNR setting: no
MulticastDNS setting: no
  DNSOverTLS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
         DNS Servers: 10.1.34.14
Fallback DNS Servers: 10.1.1.191
          DNSSEC NTA: 10.in-addr.arpa
                      16.172.in-addr.arpa
                      168.192.in-addr.arpa
                      17.172.in-addr.arpa
                      18.172.in-addr.arpa
                      19.172.in-addr.arpa
                      20.172.in-addr.arpa
                      21.172.in-addr.arpa
                      22.172.in-addr.arpa
                      23.172.in-addr.arpa
                      24.172.in-addr.arpa
                      25.172.in-addr.arpa
                      26.172.in-addr.arpa
                      27.172.in-addr.arpa
                      28.172.in-addr.arpa
                      29.172.in-addr.arpa
                      30.172.in-addr.arpa
                      31.172.in-addr.arpa
                      corp
                      d.f.ip6.arpa
                      home
                      internal
                      intranet
                      lan
                      local
                      private
                      test
...

BIND configuration

I modified the file named.conf.local (I didn't include a reverse lookup for cluster111.local because I can't have two zones with the same name; I didn't find how to fix that):

zone    "cluster114.local"   {
        type master;
        file    "/etc/bind/forward.cluster114.local";
 };

zone   "0.1.10.in-addr.arpa"        {
       type master;
       file    "/etc/bind/reverse.cluster114.local";
 };

zone    "cluster111.local"   {
        type master;
        file    "/etc/bind/forward.cluster111.local";
 };

Content of the file forward.cluster114.local:

$TTL    604800

@       IN      SOA     primary.cluster114.local. root.primary.cluster114.local. (
                              6         ; Serial
                         604820         ; Refresh
                          86600         ; Retry
                        2419600         ; Expire
                         604600 )       ; Negative Cache TTL

;Name Server Information
@       IN      NS      primary.cluster114.local.

;IP address of Your Domain Name Server(DNS)
primary IN       A      10.1.34.14

;Mail Server MX (Mail exchanger) Record
cluster114.local. IN  MX  10  mail.cluster114.local.

;A Record for Host names
www     IN       A       10.1.34.14
mail    IN       A       10.1.34.14
dev     IN       A       10.1.34.55

;CNAME Record
ftp     IN      CNAME    ftp.cluster114.local.

Content of the file reverse.cluster114.local:

$TTL    604800
@       IN      SOA     cluster114.local. root.cluster114.local. (
                             21         ; Serial
                         604820         ; Refresh
                          864500        ; Retry
                        2419270         ; Expire
                         604880 )       ; Negative Cache TTL

;Your Name Server Info
@       IN      NS      primary.cluster114.local.
primary IN      A       10.1.34.14

;Reverse Lookup for Your DNS Server
14      IN      PTR     primary.cluster114.local.

;PTR Record IP address to HostName
14      IN      PTR     www.cluster114.local.
14      IN      PTR     mail.cluster114.local.
55      IN      PTR     dev.cluster114.local.

Here is the content of forward.cluster111.local:

k8s-ns1 IN  A   10.1.34.56  ; glue record
;
$ORIGIN cluster111.local.
$TTL 1D
@       IN  NS  k8s-ns1.cluster111.local.

root@test-pcl4014:/etc/bind# named-checkzone cluster111.local /etc/bind/forward.cluster111.local
/etc/bind/forward.cluster111.local:1: no TTL specified; using SOA MINTTL instead
zone cluster111.local/IN: loaded serial 6
OK

What did I miss?

Here are the k8s_gateway logs:

root@test-pcl4014:/etc/bind# kubectl logs exdns-k8s-gateway-777458bf55-p2dq6 k8s-gateway
[INFO] plugin/k8s_gateway: Starting k8s_gateway controller
.:53
[INFO] 127.0.0.1:35549 - 63521 "HINFO IN 6508459265347430793.4264319167340456566. udp 57 false 512" NOERROR - 0 0.000543856s
[ERROR] plugin/errors: 2 6508459265347430793.4264319167340456566. HINFO: plugin/loop: no next plugin found
[INFO] plugin/reload: Running configuration MD5 = 7c51ed2244d42192ca2bf31543bdeed8
CoreDNS-1.8.0
linux/amd64, go1.14.4, 7fbc4aa
W0701 14:27:16.464284       1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0701 14:27:16.468883       1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
[INFO] plugin/k8s_gateway: Synced all required resources
W0701 14:36:58.470872       1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0701 14:44:05.472457       1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0701 14:49:24.475589       1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0701 14:55:10.477756       1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0701 15:01:40.480289       1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
root@test-pcl4014:/etc/bind#

The generated ConfigMap looks like this:

root@test-pcl4014:/etc/bind# kubectl get cm exdns-k8s-gateway -o yaml
apiVersion: v1
data:
  Corefile: |-
    .:53 {
        errors
        log
        health {
            lameduck 5s
        }
        ready
        k8s_gateway "cluster111.local" {
          apex exdns-k8s-gateway.default
          ttl 300
        }
        prometheus 0.0.0.0:9153
        loop
        reload
        loadbalance
    }
kind: ConfigMap

I changed the ConfigMap and added a forward, like in the example. It still didn't work, but the loop error is no longer present.
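
For reference, the shape of that change is just a forward plugin added to the same server block (a sketch of the modified Corefile; /etc/resolv.conf as the upstream is my assumption):

.:53 {
    errors
    log
    health {
        lameduck 5s
    }
    ready
    k8s_gateway "cluster111.local" {
      apex exdns-k8s-gateway.default
      ttl 300
    }
    prometheus 0.0.0.0:9153
    forward . /etc/resolv.conf   # gives the loop-detection probe a next plugin
    loop
    reload
    loadbalance
}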

I also activated BIND logs and ran two queries, one for cluster114 and one for cluster111:

Jul 01 12:00:42 test-pcl4014 named[857317]: client @0x7fc0b4000cd0 10.1.34.14#51893 (dev.cluster114.local): query: dev.cluster114.local IN A +E(0) (10.1.34.14)
Jul 01 12:00:42 test-pcl4014 named[857317]: client @0x7fc0b4004fb0 10.1.34.14#51893 (dev.cluster114.local): query: dev.cluster114.local IN AAAA +E(0) (10.1.34.14)
Jul 01 12:00:46 test-pcl4014 named[857317]: client @0x7fc0a4000cd0 10.1.34.14#47192 (test-pcl4014): query: test-pcl4014 IN A +E(0) (10.1.34.14)
Jul 01 12:00:46 test-pcl4014 named[857317]: client @0x7fc0a4004fb0 10.1.34.14#47192 (test-pcl4014): query: test-pcl4014 IN AAAA +E(0) (10.1.34.14)
Jul 01 12:00:51 test-pcl4014 named[857317]: client @0x7fc0c4000cd0 10.1.34.14#42113 (qa.cluster111.local): query: qa.cluster111.local IN A +E(0) (10.1.34.14)
Jul 01 12:00:51 test-pcl4014 named[857317]: client @0x7fc0c4004fb0 10.1.34.14#42113 (qa.cluster111.local): query: qa.cluster111.local IN AAAA +E(0) (10.1.34.14)

root@test-pcl4014:/etc/bind# nslookup dev.cluster114.local
Server:         10.1.34.14
Address:        10.1.34.14#53

Name:   dev.cluster114.local
Address: 10.1.34.55

root@test-pcl4014:/etc/bind# nslookup qa.cluster111.local
Server:         10.1.34.14
Address:        10.1.34.14#53

** server can't find qa.cluster111.local: NXDOMAIN

root@test-pcl4014:/etc/bind#

survivant commented 3 years ago

I tried with dig to see the difference:

root@test-pcl4014:/etc/bind# dig @10.1.34.14 dev.cluster114.local

; <<>> DiG 9.16.1-Ubuntu <<>> @10.1.34.14 dev.cluster114.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 65076
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 5b9bc7e435a321ec0100000060ddf56a489acaa159b77bef (good)
;; QUESTION SECTION:
;dev.cluster114.local.          IN      A

;; ANSWER SECTION:
dev.cluster114.local.   604800  IN      A       10.1.34.55

;; Query time: 0 msec
;; SERVER: 10.1.34.14#53(10.1.34.14)
;; WHEN: Thu Jul 01 13:03:38 EDT 2021
;; MSG SIZE  rcvd: 93

root@test-pcl4014:/etc/bind# dig @10.1.34.14 qa.cluster111.local

; <<>> DiG 9.16.1-Ubuntu <<>> @10.1.34.14 qa.cluster111.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 713
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: f5413854d68878e00100000060ddf570b2552d94013a71b9 (good)
;; QUESTION SECTION:
;qa.cluster111.local.           IN      A

;; AUTHORITY SECTION:
cluster111.local.       604600  IN      SOA     k8s-ns1.cluster111.local. root.k8s-ns1.cluster111.local. 6 604820 86600 2419600 604600

;; Query time: 0 msec
;; SERVER: 10.1.34.14#53(10.1.34.14)
;; WHEN: Thu Jul 01 13:03:44 EDT 2021
;; MSG SIZE  rcvd: 125

root@test-pcl4014:/etc/bind#

survivant commented 3 years ago

I played around a little bit:

root@test-pcl4014:/etc/bind# cat forward.cluster111.local
@       IN      SOA     k8s-ns1.cluster111.local. root.k8s-ns1.cluster111.local. (
                              6         ; Serial
                         604820         ; Refresh
                          86600         ; Retry
                        2419600         ; Expire
                         604600 )       ; Negative Cache TTL

k8s-ns1 IN      A       10.1.34.56      ; glue record
;

$ORIGIN cluster111.local.
$TTL 1D
@               IN  NS  k8s-ns1.cluster111.local.
root@test-pcl4014:/etc/bind#

root@test-pcl4014:~# dig @10.1.34.14 k8s-ns1.cluster111.local

; <<>> DiG 9.16.1-Ubuntu <<>> @10.1.34.14 k8s-ns1.cluster111.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47043
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 2c6ba882c2291cec0100000060ddf7fb7d85c2629674db29 (good)
;; QUESTION SECTION:
;k8s-ns1.cluster111.local.      IN      A

;; ANSWER SECTION:
k8s-ns1.cluster111.local. 604600 IN     A       10.1.34.56

;; Query time: 0 msec
;; SERVER: 10.1.34.14#53(10.1.34.14)
;; WHEN: Thu Jul 01 13:14:35 EDT 2021
;; MSG SIZE  rcvd: 97

root@test-pcl4014:~#

root@test-pcl4014:~# dig @10.1.34.14 qa.k8s-ns1.cluster111.local

; <<>> DiG 9.16.1-Ubuntu <<>> @10.1.34.14 qa.k8s-ns1.cluster111.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 52433
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 8b01682989cbd8ac0100000060ddf855c62cd790ba6e3eb0 (good)
;; QUESTION SECTION:
;qa.k8s-ns1.cluster111.local.   IN      A

;; AUTHORITY SECTION:
cluster111.local.       604600  IN      SOA     k8s-ns1.cluster111.local. root.k8s-ns1.cluster111.local. 6 604820 86600 2419600 604600

;; Query time: 0 msec
;; SERVER: 10.1.34.14#53(10.1.34.14)
;; WHEN: Thu Jul 01 13:16:05 EDT 2021
;; MSG SIZE  rcvd: 141

root@test-pcl4014:~#

I added a new host to the Ingress, and domains starting with qa. are still not resolved.

root@test-pcl4014:~# kubectl -n qa get ingress
NAME             CLASS    HOSTS                                             ADDRESS      PORTS   AGE
production-wui   <none>   qa.cluster111.local,qa.ks8-ns1.cluster111.local   10.1.34.55   80      3h47m
root@test-pcl4014:~#

root@test-pcl4014:~# curl http://k8s-ns1.cluster111.local/production
curl: (7) Failed to connect to k8s-ns1.cluster111.local port 80: No route to host

root@test-pcl4014:~# curl http://qa.k8s-ns1.cluster111.local/production
curl: (6) Could not resolve host: qa.k8s-ns1.cluster111.local
root@test-pcl4014:~#

networkop commented 3 years ago

I think you've misconfigured your BIND. You need to configure any zone delegation in the parent zone, which in your case is .local:

$ORIGIN local.
...
$ORIGIN cluster114.local.
$TTL 1D
@       IN  NS  k8s-ns1.cluster114.local.
k8s-ns1 IN  A   10.1.34.55  ; glue record

What you've done instead is define cluster114.local itself as a master zone inside BIND. You can see that by running dig +trace: your query never gets to k8s_gateway.
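
A sketch of that check (10.1.34.14 is the BIND server from earlier in the thread; in a private setup the trace mainly shows whether BIND answers authoritatively itself or hands out the delegation):

# follow the resolution path and see where it stops
dig +trace qa.cluster111.local @10.1.34.14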