Kong / kong-dist-kubernetes

Kubernetes managed Kong cluster

Unsuccessful deployment on my Kubernetes cluster when Kong's dns_resolver is Consul #60

Closed. vTNT closed this issue 6 years ago.

vTNT commented 6 years ago

Image: kong:0.14.0-centos. My deployment file:

apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 0.0.0.0/0
  ports:
  - name: kong-proxy
    port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: kong

---
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy-ssl
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 0.0.0.0/0
  ports:
  - name: kong-proxy-ssl
    port: 8443
    targetPort: 8443
    protocol: TCP
  selector:
    app: kong

---
apiVersion: v1
kind: Service
metadata:
  name: kong-admin
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 0.0.0.0/0
  ports:
  - name: kong-admin
    port: 8001
    targetPort: 8001
    protocol: TCP
  selector:
    app: kong

---
apiVersion: v1
kind: Service
metadata:
  name: kong-admin-ssl
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 0.0.0.0/0
  ports:
  - name: kong-admin-ssl
    port: 8444
    targetPort: 8444
    protocol: TCP
  selector:
    app: kong

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kong-rc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kong-rc
        app: kong
    spec:
      containers:
        - name: consul-client
          image: "consul:1.2.2"
          args:
            - "agent"
            - "-advertise=$(PODIP)"
            - "-bind=0.0.0.0"
            - "-retry-join=consul-0.consul.$(NAMESPACE).svc.cluster.local"
            - "-retry-join=consul-1.consul.$(NAMESPACE).svc.cluster.local"
            - "-retry-join=consul-2.consul.$(NAMESPACE).svc.cluster.local"
            - "-client=0.0.0.0"
            - "-datacenter=dc1"
            - "-data-dir=/consul/data"
            - "-domain=cluster.local"
          env:
            - name: PODIP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          lifecycle:
            preStop:
              exec:
                command:
                - /bin/sh
                - -c
                - consul leave
          resources:
            limits:
              cpu: "200m"
              memory: 512Mi
            requests:
              cpu: "100m"
              memory: 256Mi
          ports:
            - containerPort: 8500
              name: ui-port
            - containerPort: 8400
              name: alt-port
            - containerPort: 53
              name: udp-port
            - containerPort: 8443
              name: https-port
            - containerPort: 8080
              name: http-port
            - containerPort: 8301
              name: serflan
            - containerPort: 8302
              name: serfwan
            - containerPort: 8600
              name: consuldns
            - containerPort: 8300
              name: server
        - name: kong
          image: "kong:0.14.0-centos"
          imagePullPolicy: Always
          securityContext:
            capabilities:
              add:
              - SYS_MODULE
              - NET_ADMIN
              - SYS_ADMIN
          env:
            - name: KONG_ADMIN_LISTEN
              value: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
            - name: KONG_DATABASE
              value: postgres
            - name: KONG_PG_USER
              value: kong
            - name: KONG_PG_DATABASE
              value: kong
            - name: KONG_PG_PASSWORD
              value: kong
            - name: KONG_PG_HOST
              value: postgres
            - name: KONG_PROXY_ACCESS_LOG
              value: "/var/log/proxy_access.log"
            - name: KONG_ADMIN_ACCESS_LOG
              value: "/var/log/admin_access.log"
            - name: KONG_PROXY_ERROR_LOG
              value: "/var/log/proxy_error.log"
            - name: KONG_ADMIN_ERROR_LOG
              value: "/var/log/admin_error.log"
            - name: KONG_DNS_RESOLVER
              value: "127.0.0.1:8600"
            - name: KONG_DNSMASQ
              value: "off"
          resources:
            limits:
              cpu: "2"
              memory: 8G
            requests:
              cpu: "2"
              memory: 6G
          ports:
            - name: admin
              containerPort: 8001
              protocol: TCP
            - name: proxy
              containerPort: 8000
              protocol: TCP
            - name: proxy-ssl
              containerPort: 8443
              protocol: TCP
            - name: admin-ssl
              containerPort: 8444
              protocol: TCP

When I run:

kubectl create -f kong.yaml

it shows that the pod runs successfully, but when I log into the pod I find errors in the proxy error log (KONG_PROXY_ERROR_LOG):

2018/08/20 06:46:25 [notice] 1#0: start worker process 86
2018/08/20 06:46:26 [crit] 85#0: *26 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 72#0: *72 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 62#0: *104 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 82#0: *109 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 71#0: *64 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 63#0: *46 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 74#0: *116 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 68#0: *55 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 55#0: *89 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 60#0: *49 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 67#0: *54 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:27 [crit] 70#0: *66 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:27 [crit] 64#0: *53 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:27 [crit] 77#0: *120 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:27 [crit] 75#0: *52 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:27 [crit] 69#0: *71 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:27 [crit] 81#0: *117 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:27 [crit] 57#0: *58 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:28 [crit] 65#0: *65 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:28 [crit] 84#0: *61 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:28 [crit] 66#0: *69 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:29 [crit] 86#0: *70 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:29 [crit] 58#0: *77 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:30 [crit] 61#0: *80 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:30 [crit] 56#0: *86 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:31 [crit] 79#0: *85 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:31 [crit] 78#0: *95 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:31 [crit] 83#0: *98 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
2018/08/20 06:46:31 [crit] 59#0: *90 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
2018/08/20 06:46:31 [crit] 80#0: *93 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
2018/08/20 06:46:31 [crit] 76#0: *94 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
2018/08/20 06:46:31 [crit] 73#0: *99 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
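
These failures suggest that Kong is sending every DNS query, including the lookup of the postgres service, to the Consul agent on 127.0.0.1:8600, which answers with SERVFAIL; note that the agent above is even started with -domain=cluster.local, so it treats cluster-local names as its own without actually having any records for them. A quick way to confirm this from inside the pod (a sketch: the pod name is a placeholder, the gslb-qa namespace is the one used later in this thread, and dig must be available in the container):

# open a shell in the kong container (pod name is illustrative)
kubectl exec -it <kong-pod> -c kong -- sh

# ask the Consul agent's DNS port directly; this is expected to fail (SERVFAIL)
dig @127.0.0.1 -p 8600 postgres.gslb-qa.svc.cluster.local

# the same name resolves fine via the cluster DNS listed in /etc/resolv.conf
dig postgres.gslb-qa.svc.cluster.local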
vTNT commented 6 years ago

And if I don't set KONG_DNS_RESOLVER at all, all services work well. But I need Consul to be our DNS resolver, so how can I solve this problem?

vTNT commented 6 years ago

This is my solution:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kong-rc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kong-rc
        app: kong
    spec:
      containers:
        - name: consul-client
          image: "consul:1.2.2"
          args:
            - "agent"
            - "-advertise=$(PODIP)"
            - "-bind=0.0.0.0"
            - "-retry-join=consul-0.consul.$(NAMESPACE).svc.cluster.local"
            - "-retry-join=consul-1.consul.$(NAMESPACE).svc.cluster.local"
            - "-retry-join=consul-2.consul.$(NAMESPACE).svc.cluster.local"
            - "-client=0.0.0.0"
            - "-datacenter=dc1"
            - "-data-dir=/consul/data"
          env:
            - name: PODIP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          lifecycle:
            preStop:
              exec:
                command:
                - /bin/sh
                - -c
                - consul leave
          resources:
            limits:
              cpu: "200m"
              memory: 512Mi
            requests:
              cpu: "100m"
              memory: 256Mi
          ports:
            - containerPort: 8500
              name: ui-port
            - containerPort: 8400
              name: alt-port
            - containerPort: 53
              name: udp-port
            - containerPort: 8443
              name: https-port
            - containerPort: 8080
              name: http-port
            - containerPort: 8301
              name: serflan
            - containerPort: 8302
              name: serfwan
            - containerPort: 8600
              name: consuldns
            - containerPort: 8300
              name: server
        - name: dnsmasq
          image: "andyshinn/dnsmasq:2.76"
          imagePullPolicy: Always
          args:
            - "-S"
            - "/consul/127.0.0.1#8600"
          securityContext:
            capabilities:
              add:
              - NET_ADMIN
          resources:
            limits:
              cpu: "200m"
              memory: 512Mi
            requests:
              cpu: "100m"
              memory: 256Mi
          ports:
            - containerPort: 53
              name: tcp-port
              protocol: TCP
            - containerPort: 53
              name: udp-port
              protocol: UDP
        - name: kong
          image: "kong:0.14.0-centos"
          imagePullPolicy: Always
          securityContext:
            capabilities:
              add:
              - SYS_MODULE
              - NET_ADMIN
              - SYS_ADMIN
          env:
            - name: KONG_ADMIN_LISTEN
              value: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
            - name: KONG_DATABASE
              value: postgres
            - name: KONG_PG_USER
              value: kong
            - name: KONG_PG_DATABASE
              value: kong
            - name: KONG_PG_PASSWORD
              value: kong
            - name: KONG_PG_HOST
              value: postgres-0.postgres.gslb-qa.svc.cluster.local
            - name: KONG_PROXY_ACCESS_LOG
              value: "/var/log/proxy_access.log"
            - name: KONG_ADMIN_ACCESS_LOG
              value: "/var/log/admin_access.log"
            - name: KONG_PROXY_ERROR_LOG
              value: "/var/log/proxy_error.log"
            - name: KONG_ADMIN_ERROR_LOG
              value: "/var/log/admin_error.log"
            - name: KONG_DNS_RESOLVER
              value: "127.0.0.1:53"
            - name: KONG_DNSMASQ
              value: "off"
          resources:
            limits:
              cpu: "2"
              memory: 8G
            requests:
              cpu: "2"
              memory: 6G
          ports:
            - name: admin
              containerPort: 8001
              protocol: TCP
            - name: proxy
              containerPort: 8000
              protocol: TCP
            - name: proxy-ssl
              containerPort: 8443
              protocol: TCP
            - name: admin-ssl
              containerPort: 8444
              protocol: TCP
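
The key piece is the dnsmasq sidecar: its -S flag is dnsmasq's server option, so names under the consul domain are forwarded to the local Consul agent on port 8600, while every other name (including the postgres-0.postgres.gslb-qa.svc.cluster.local FQDN) falls through to the nameservers in /etc/resolv.conf, i.e. the cluster DNS. Kong then needs only a single resolver, 127.0.0.1:53. The equivalent long-form invocation (a sketch, outside the manifest):

# forward *.consul lookups to the local Consul agent's DNS port;
# all other lookups go to the default resolvers from /etc/resolv.conf
dnsmasq --no-daemon --server=/consul/127.0.0.1#8600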

It should be noted that if your Consul agent's domain is cluster.local (as it was in my first manifest, via -domain=cluster.local), you have to change it; the dnsmasq rule above forwards only the consul domain, so otherwise it won't work.
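
With the pod running, the split resolution can be verified from inside the kong container (a sketch; the pod name and the Consul service name are placeholders, and dig must be available in the container):

# a Kubernetes name: dnsmasq passes it through to the cluster DNS
kubectl exec -it <kong-pod> -c kong -- dig @127.0.0.1 -p 53 postgres-0.postgres.gslb-qa.svc.cluster.local

# a Consul name: dnsmasq forwards it to the agent on 127.0.0.1:8600
kubectl exec -it <kong-pod> -c kong -- dig @127.0.0.1 -p 53 <your-service>.service.consul

Both queries should return an answer; if the second one fails, check the Consul agent's -domain setting as noted above.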