Closed: vTNT closed this issue 6 years ago
If I don't set KONG_DNS_RESOLVER at all, every service works fine; but I need Consul to be our DNS resolver, so how can I solve this problem?
This is my solution:
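For background on that variable: Kong maps every KONG_* environment variable onto the kong.conf directive of the same name, so KONG_DNS_RESOLVER=127.0.0.1:53 in the deployment below is equivalent to this kong.conf fragment (the address is the dnsmasq sidecar running in the same pod):

```
# kong.conf equivalent of KONG_DNS_RESOLVER=127.0.0.1:53;
# 127.0.0.1:53 is the dnsmasq sidecar in the same pod
dns_resolver = 127.0.0.1:53
```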
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kong-rc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kong-rc
        app: kong
    spec:
      containers:
      - name: consul-client
        image: "consul:1.2.2"
        args:
        - "agent"
        - "-advertise=$(PODIP)"
        - "-bind=0.0.0.0"
        - "-retry-join=consul-0.consul.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=consul-1.consul.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=consul-2.consul.$(NAMESPACE).svc.cluster.local"
        - "-client=0.0.0.0"
        - "-datacenter=dc1"
        - "-data-dir=/consul/data"
        env:
        - name: PODIP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - consul leave
        resources:
          limits:
            cpu: "200m"
            memory: 512Mi
          requests:
            cpu: "100m"
            memory: 256Mi
        ports:
        - containerPort: 8500
          name: ui-port
        - containerPort: 8400
          name: alt-port
        # renamed from "udp-port": container port names must be unique
        # within a pod, and the dnsmasq container below uses "udp-port"
        - containerPort: 53
          name: consul-dns-udp
        - containerPort: 8443
          name: https-port
        - containerPort: 8080
          name: http-port
        - containerPort: 8301
          name: serflan
        - containerPort: 8302
          name: serfwan
        - containerPort: 8600
          name: consuldns
        - containerPort: 8300
          name: server
      - name: dnsmasq
        image: "andyshinn/dnsmasq:2.76"
        imagePullPolicy: Always
        args:
        - "-S"
        - "/consul/127.0.0.1#8600"
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
        resources:
          limits:
            cpu: "200m"
            memory: 512Mi
          requests:
            cpu: "100m"
            memory: 256Mi
        ports:
        - containerPort: 53
          name: tcp-port
          protocol: TCP
        - containerPort: 53
          name: udp-port
          protocol: UDP
      - name: kong
        image: "kong:0.14.0-centos"
        imagePullPolicy: Always
        securityContext:
          capabilities:
            add:
            - SYS_MODULE
            - NET_ADMIN
            - SYS_ADMIN
        env:
        - name: KONG_ADMIN_LISTEN
          value: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
        - name: KONG_DATABASE
          value: postgres
        - name: KONG_PG_USER
          value: kong
        - name: KONG_PG_DATABASE
          value: kong
        - name: KONG_PG_PASSWORD
          value: kong
        - name: KONG_PG_HOST
          value: postgres-0.postgres.gslb-qa.svc.cluster.local
        - name: KONG_PROXY_ACCESS_LOG
          value: "/var/log/proxy_access.log"
        - name: KONG_ADMIN_ACCESS_LOG
          value: "/var/log/admin_access.log"
        - name: KONG_PROXY_ERROR_LOG
          value: "/var/log/proxy_error.log"
        - name: KONG_ADMIN_ERROR_LOG
          value: "/var/log/admin_error.log"
        - name: KONG_DNS_RESOLVER
          value: "127.0.0.1:53"
        - name: KONG_DNSMASQ
          value: "off"
        resources:
          limits:
            cpu: "2"
            memory: 8G
          requests:
            cpu: "2"
            memory: 6G
        ports:
        - name: admin
          containerPort: 8001
          protocol: TCP
        - name: proxy
          containerPort: 8000
          protocol: TCP
        - name: proxy-ssl
          containerPort: 8443
          protocol: TCP
        - name: admin-ssl
          containerPort: 8444
          protocol: TCP
It should be noted that if your Consul cluster's DNS domain is set to cluster.local, you should change it; otherwise dnsmasq won't work, since the -S /consul/ rule only matches names under the consul domain, and cluster.local collides with Kubernetes' own DNS names.
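The -S /consul/127.0.0.1#8600 flag tells dnsmasq to forward only queries whose names fall under the consul domain to Consul's DNS port, and everything else to its default upstream. A rough sketch of that routing decision (plain Python, illustrative only, not dnsmasq's actual code; the default-upstream address is a hypothetical cluster DNS IP):

```python
# Illustrative sketch of dnsmasq's per-domain server selection for
# "-S /consul/127.0.0.1#8600": names under .consul go to Consul's DNS
# port, everything else to the default upstream resolver.
CONSUL_SERVER = ("127.0.0.1", 8600)   # from -S /consul/127.0.0.1#8600
DEFAULT_SERVER = ("10.96.0.10", 53)   # hypothetical default upstream

def pick_upstream(qname: str):
    """Return the (host, port) a query for qname is forwarded to."""
    labels = qname.rstrip(".").split(".")
    # dnsmasq matches the configured domain itself and any subdomain of it
    if labels[-1] == "consul":
        return CONSUL_SERVER
    return DEFAULT_SERVER

print(pick_upstream("postgres.service.consul"))
# -> ('127.0.0.1', 8600)
print(pick_upstream("kubernetes.default.svc.cluster.local"))
# -> ('10.96.0.10', 53)
```

This is presumably also why the cluster.local caveat above matters: if Consul's domain were cluster.local, the forwarding rule would have to be -S /cluster.local/... and would then capture Kubernetes' own service names as well.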
(Background: the image is kong:0.14.0-centos. When I ran my original deployment file, kubectl reported the pod as running successfully, but after logging into the pod I found errors in KONG_PROXY_ERROR_LOG.)
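To narrow down where resolution fails in a case like this, one way is to exec into the pod and query each resolver in the chain directly (the pod name placeholder and the queried names are illustrative; dig comes from bind-utils/dnsutils and may need to be installed in the container):

```
# Query the dnsmasq sidecar the way Kong does (127.0.0.1:53):
kubectl exec -it kong-rc-<pod-id> -c kong -- \
  dig @127.0.0.1 -p 53 postgres-0.postgres.gslb-qa.svc.cluster.local

# Query Consul's DNS interface directly (port 8600) for a Consul service:
kubectl exec -it kong-rc-<pod-id> -c consul-client -- \
  dig @127.0.0.1 -p 8600 consul.service.consul
```

If the second query answers but the first does not, the dnsmasq forwarding rule (or its default upstream) is the likely culprit rather than Kong's dns_resolver setting.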