Closed: rainest closed this issue 2 years ago
This is strange. The services created:
```yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      meta.helm.sh/release-name: ingress-controller
      meta.helm.sh/release-namespace: kong-system
    creationTimestamp: "2022-10-21T23:08:16Z"
    labels:
      app.kubernetes.io/instance: ingress-controller
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: kong
      app.kubernetes.io/version: "3.0"
      helm.sh/chart: kong-2.13.1
    name: ingress-controller-kong-admin
    namespace: kong-system
    resourceVersion: "517"
    uid: f1206ff4-5b07-445f-ab81-1cf741f80584
  spec:
    clusterIP: 10.96.243.28
    clusterIPs:
    - 10.96.243.28
    internalTrafficPolicy: Cluster
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    ports:
    - appProtocol: http
      name: kong-admin
      port: 8001
      protocol: TCP
      targetPort: 8001
    selector:
      app.kubernetes.io/component: app
      app.kubernetes.io/instance: ingress-controller
      app.kubernetes.io/name: kong
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      meta.helm.sh/release-name: ingress-controller
      meta.helm.sh/release-namespace: kong-system
    creationTimestamp: "2022-10-21T23:08:16Z"
    labels:
      app.kubernetes.io/instance: ingress-controller
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: kong
      app.kubernetes.io/version: "3.0"
      enable-metrics: "true"
      helm.sh/chart: kong-2.13.1
    name: ingress-controller-kong-proxy
    namespace: kong-system
    resourceVersion: "856"
    uid: 13a30451-001f-43b6-81cd-200207f0ff56
  spec:
    allocateLoadBalancerNodePorts: true
    clusterIP: 10.96.158.116
    clusterIPs:
    - 10.96.158.116
    externalTrafficPolicy: Cluster
    internalTrafficPolicy: Cluster
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    ports:
    - appProtocol: http
      name: kong-proxy
      nodePort: 30080
      port: 80
      protocol: TCP
      targetPort: 8000
    - appProtocol: https
      name: kong-proxy-tls
      nodePort: 30888
      port: 443
      protocol: TCP
      targetPort: 8443
    - name: stream-8888
      nodePort: 30333
      port: 8888
      protocol: TCP
      targetPort: 8888
    - name: stream-9999
      nodePort: 32497
      port: 9999
      protocol: TCP
      targetPort: 9999
    - name: stream-8899
      nodePort: 31232
      port: 8899
      protocol: TCP
      targetPort: 8899
    selector:
      app.kubernetes.io/component: app
      app.kubernetes.io/instance: ingress-controller
      app.kubernetes.io/name: kong
    sessionAffinity: None
    type: LoadBalancer
  status:
    loadBalancer:
      ingress:
      - ip: 172.18.0.101
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: "2022-10-21T23:08:17Z"
    name: ingress-controller-kong-udp
    namespace: kong-system
    resourceVersion: "854"
    uid: 55938431-7a32-49d8-89b1-74f6154cdfdb
  spec:
    allocateLoadBalancerNodePorts: true
    clusterIP: 10.96.224.57
    clusterIPs:
    - 10.96.224.57
    externalTrafficPolicy: Cluster
    internalTrafficPolicy: Cluster
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    ports:
    - name: ingress-controller-kong-udp
      nodePort: 31881
      port: 9999
      protocol: UDP
      targetPort: 9999
    selector:
      app.kubernetes.io/component: app
      app.kubernetes.io/instance: ingress-controller
      app.kubernetes.io/name: kong
    sessionAffinity: None
    type: LoadBalancer
  status:
    loadBalancer:
      ingress:
      - ip: 172.18.0.100
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```
Kong pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kuma.io/gateway: enabled
    kuma.io/service-account-token-volume: ingress-controller-kong-token
    traffic.sidecar.istio.io/includeInboundPorts: ""
  creationTimestamp: "2022-10-21T23:08:17Z"
  generateName: ingress-controller-kong-74dfbb7bbc-
  labels:
    app: ingress-controller-kong
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: ingress-controller
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kong
    app.kubernetes.io/version: "3.0"
    helm.sh/chart: kong-2.13.1
    pod-template-hash: 74dfbb7bbc
    version: "3.0"
  name: ingress-controller-kong-74dfbb7bbc-sbpwt
  namespace: kong-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: ingress-controller-kong-74dfbb7bbc
    uid: 3f46ae80-e26f-432d-8a5f-b91c086a35c2
  resourceVersion: "743"
  uid: 8a892c47-136d-485d-8ac3-5c550dc63f56
spec:
  automountServiceAccountToken: false
  containers:
  - env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: CONTROLLER_ELECTION_ID
      value: kong-ingress-controller-leader-kong
    - name: CONTROLLER_INGRESS_CLASS
      value: kong
    - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
      value: "true"
    - name: CONTROLLER_KONG_ADMIN_URL
      value: http://localhost:8001
    - name: CONTROLLER_PUBLISH_SERVICE
      value: kong-system/ingress-controller-kong-proxy
    image: kong/kubernetes-ingress-controller:2.7
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    name: ingress-controller
    ports:
    - containerPort: 10255
      name: cmetrics
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    resources: {}
    securityContext: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: ingress-controller-kong-token
      readOnly: true
  - env:
    - name: KONG_ADMIN_ACCESS_LOG
      value: /dev/stdout
    - name: KONG_ADMIN_ERROR_LOG
      value: /dev/stderr
    - name: KONG_ADMIN_GUI_ACCESS_LOG
      value: /dev/stdout
    - name: KONG_ADMIN_GUI_ERROR_LOG
      value: /dev/stderr
    - name: KONG_ADMIN_LISTEN
      value: 0.0.0.0:8001
    - name: KONG_CLUSTER_LISTEN
      value: "off"
    - name: KONG_DATABASE
      value: "off"
    - name: KONG_KIC
      value: "on"
    - name: KONG_LUA_PACKAGE_PATH
      value: /opt/?.lua;/opt/?/init.lua;;
    - name: KONG_NGINX_WORKER_PROCESSES
      value: "2"
    - name: KONG_PLUGINS
      value: bundled
    - name: KONG_PORTAL_API_ACCESS_LOG
      value: /dev/stdout
    - name: KONG_PORTAL_API_ERROR_LOG
      value: /dev/stderr
    - name: KONG_PORT_MAPS
      value: 80:8000, 443:8443
    - name: KONG_PREFIX
      value: /kong_prefix/
    - name: KONG_PROXY_ACCESS_LOG
      value: /dev/stdout
    - name: KONG_PROXY_ERROR_LOG
      value: /dev/stderr
    - name: KONG_PROXY_LISTEN
      value: 0.0.0.0:8000, 0.0.0.0:8443 http2 ssl
    - name: KONG_PROXY_STREAM_ACCESS_LOG
      value: /dev/stdout basic
    - name: KONG_PROXY_STREAM_ERROR_LOG
      value: /dev/stderr
    - name: KONG_ROUTER_FLAVOR
      value: traditional
    - name: KONG_STATUS_ACCESS_LOG
      value: "off"
    - name: KONG_STATUS_ERROR_LOG
      value: /dev/stderr
    - name: KONG_STATUS_LISTEN
      value: 0.0.0.0:8100
    - name: KONG_STREAM_LISTEN
      value: 0.0.0.0:8888, 0.0.0.0:9999 udp reuseport, 0.0.0.0:8899 ssl reuseport
    - name: KONG_NGINX_DAEMON
      value: "off"
    image: kong:3.0
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - kong
          - quit
          - --wait=15
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /status
        port: status
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    name: proxy
    ports:
    - containerPort: 8001
      name: admin
      protocol: TCP
    - containerPort: 8000
      name: proxy
      protocol: TCP
    - containerPort: 8443
      name: proxy-tls
      protocol: TCP
    - containerPort: 8888
      name: stream-8888
      protocol: TCP
    - containerPort: 9999
      name: stream-9999
      protocol: TCP
    - containerPort: 8899
      name: stream-8899
      protocol: TCP
    - containerPort: 8100
      name: status
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /status
        port: status
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    resources: {}
    securityContext: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /kong_prefix/
      name: ingress-controller-kong-prefix-dir
    - mountPath: /tmp
      name: ingress-controller-kong-tmp
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  initContainers:
  - command:
    - rm
    - -vrf
    - $KONG_PREFIX/pids
    env:
    - name: KONG_ADMIN_ACCESS_LOG
      value: /dev/stdout
    - name: KONG_ADMIN_ERROR_LOG
      value: /dev/stderr
    - name: KONG_ADMIN_GUI_ACCESS_LOG
      value: /dev/stdout
    - name: KONG_ADMIN_GUI_ERROR_LOG
      value: /dev/stderr
    - name: KONG_ADMIN_LISTEN
      value: 0.0.0.0:8001
    - name: KONG_CLUSTER_LISTEN
      value: "off"
    - name: KONG_DATABASE
      value: "off"
    - name: KONG_KIC
      value: "on"
    - name: KONG_LUA_PACKAGE_PATH
      value: /opt/?.lua;/opt/?/init.lua;;
    - name: KONG_NGINX_WORKER_PROCESSES
      value: "2"
    - name: KONG_PLUGINS
      value: bundled
    - name: KONG_PORTAL_API_ACCESS_LOG
      value: /dev/stdout
    - name: KONG_PORTAL_API_ERROR_LOG
      value: /dev/stderr
    - name: KONG_PORT_MAPS
      value: 80:8000, 443:8443
    - name: KONG_PREFIX
      value: /kong_prefix/
    - name: KONG_PROXY_ACCESS_LOG
      value: /dev/stdout
    - name: KONG_PROXY_ERROR_LOG
      value: /dev/stderr
    - name: KONG_PROXY_LISTEN
      value: 0.0.0.0:8000, 0.0.0.0:8443 http2 ssl
    - name: KONG_PROXY_STREAM_ACCESS_LOG
      value: /dev/stdout basic
    - name: KONG_PROXY_STREAM_ERROR_LOG
      value: /dev/stderr
    - name: KONG_ROUTER_FLAVOR
      value: traditional
    - name: KONG_STATUS_ACCESS_LOG
      value: "off"
    - name: KONG_STATUS_ERROR_LOG
      value: /dev/stderr
    - name: KONG_STATUS_LISTEN
      value: 0.0.0.0:8100
    - name: KONG_STREAM_LISTEN
      value: 0.0.0.0:8888, 0.0.0.0:9999 udp reuseport, 0.0.0.0:8899 ssl reuseport
    image: kong:3.0
    imagePullPolicy: IfNotPresent
    name: clear-stale-pid
    resources: {}
    securityContext: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /kong_prefix/
      name: ingress-controller-kong-prefix-dir
    - mountPath: /tmp
      name: ingress-controller-kong-tmp
  nodeName: kong-control-plane
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: ingress-controller-kong
  serviceAccountName: ingress-controller-kong
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir:
      sizeLimit: 256Mi
    name: ingress-controller-kong-prefix-dir
  - emptyDir:
      sizeLimit: 1Gi
    name: ingress-controller-kong-tmp
  - name: ingress-controller-kong-token
    secret:
      defaultMode: 420
      items:
      - key: token
        path: token
      - key: ca.crt
        path: ca.crt
      - key: namespace
        path: namespace
      secretName: ingress-controller-kong-token
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-10-21T23:08:54Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-10-21T23:09:35Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-10-21T23:09:35Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-10-21T23:08:35Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://ccffb01c5758da35b5d7cba4eb94c538f654c0c492e86dcd397aadd5e1de52a1
    image: docker.io/kong/kubernetes-ingress-controller:2.7
    imageID: docker.io/kong/kubernetes-ingress-controller@sha256:5616bab3246eba6ccfbd59a6739a36623e87f98e76262664cff60622e24fb3e5
    lastState: {}
    name: ingress-controller
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-10-21T23:09:15Z"
  - containerID: containerd://350da0d1a7cdcd6253267af567e9e41dcf3d55e62cd45fefdb611d457ce5cf57
    image: docker.io/library/kong:3.0
    imageID: docker.io/library/kong@sha256:b2e287f0ce26074043dd9785c109a98e47c4cfaea6a81f5750a0ce2cd80b773f
    lastState: {}
    name: proxy
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-10-21T23:09:16Z"
  hostIP: 172.18.0.2
  initContainerStatuses:
  - containerID: containerd://1d72a07a68cfb6c51f0653ff470c8a6b775d479b74d6dfcfc8bafce9b3764930
    image: docker.io/library/kong:3.0
    imageID: docker.io/library/kong@sha256:b2e287f0ce26074043dd9785c109a98e47c4cfaea6a81f5750a0ce2cd80b773f
    lastState: {}
    name: clear-stale-pid
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: containerd://1d72a07a68cfb6c51f0653ff470c8a6b775d479b74d6dfcfc8bafce9b3764930
        exitCode: 0
        finishedAt: "2022-10-21T23:08:53Z"
        reason: Completed
        startedAt: "2022-10-21T23:08:53Z"
  phase: Running
  podIP: 10.244.0.2
  podIPs:
  - ip: 10.244.0.2
  qosClass: BestEffort
  startTime: "2022-10-21T23:08:35Z"
```
UDP ingress:
```yaml
apiVersion: v1
items:
- apiVersion: configuration.konghq.com/v1beta1
  kind: UDPIngress
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"configuration.konghq.com/v1beta1","kind":"UDPIngress","metadata":{"annotations":{"kubernetes.io/ingress.class":"kong"},"name":"udp-9999","namespace":"default"},"spec":{"rules":[{"backend":{"serviceName":"udp-echo-server","servicePort":33333},"port":9999}]}}
      kubernetes.io/ingress.class: kong
    creationTimestamp: "2022-10-21T23:42:48Z"
    generation: 1
    name: udp-9999
    namespace: default
    resourceVersion: "3160"
    uid: ba5e303d-2a6f-40d5-89c6-0720b8809635
  spec:
    rules:
    - backend:
        serviceName: udp-echo-server
        servicePort: 33333
      port: 9999
  status:
    loadBalancer:
      ingress:
      - ip: 172.18.0.101
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```
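For reference, the backend Service the UDPIngress above routes to would be shaped roughly like this. This is a hypothetical sketch reconstructed only from the `serviceName`/`servicePort` fields in the spec; the selector label is an assumption, since the actual Service is not shown in the dump:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: udp-echo-server
  namespace: default
spec:
  selector:
    app: udp-echo-server   # assumed label, not taken from the cluster
  ports:
  - name: udp-echo
    port: 33333
    protocol: UDP
    targetPort: 33333
```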
While the KONG_STREAM_LISTEN value looks valid, the proxy container only declares port 9999 as TCP, so UDP traffic to it should not work.
```
echo "payload" | nc -u 172.18.0.100 9999
Client address: 10.244.0.2:35674
Data sent by client:
b'payload\n'
^C
```
The payload was not echoed back, so it seemed the packet was dropped. Actually, it works.
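The round trip that nc exercises can be sketched locally in Python: a UDP socket bound to 0.0.0.0 answers datagrams simply because it is listening, which is the same reason the proxy's 0.0.0.0:9999 UDP listener works despite the TCP-only containerPort declaration. This is a local illustration only (the echo server here is a stand-in, not the cluster's udp-echo-server):

```python
import socket
import threading

def udp_echo_once(sock: socket.socket) -> None:
    """Echo a single received datagram back to its sender."""
    data, addr = sock.recvfrom(1024)
    sock.sendto(data, addr)

# Bind the server to all interfaces on an ephemeral port, analogous to the
# proxy's `0.0.0.0:9999 udp reuseport` stream listener.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("0.0.0.0", 0))
port = server.getsockname()[1]
threading.Thread(target=udp_echo_once, args=(server,), daemon=True).start()

# Client side: the equivalent of `echo "payload" | nc -u <ip> 9999`.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2)
client.sendto(b"payload\n", ("127.0.0.1", port))
echoed, _ = client.recvfrom(1024)
print(echoed)  # b'payload\n'
```

Nothing other than the bind itself makes the port reachable; there is no Python-side equivalent of a containerPort declaration to get wrong.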
According to https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#containerport-v1-core:

> `ports` (ContainerPort array, patch strategy: merge, patch merge key: containerPort): List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
The Service has the correct port and protocol, and NGINX creates the correct listener, binding to `0.0.0.0:9999 udp reuseport`. Thus it works.
I double-checked that in tests the pods are reachable through the UDP Kong Service.
Change the configuration to use udpProxy instead of proxy and verify that the UDP tests in KIC are still working (or working better?).
The UDP tests are using udpProxy, not proxy.
See https://github.com/Kong/kubernetes-ingress-controller/pull/2970#issuecomment-1256447417 for an apparent occurrence during tests.
Kubernetes LoadBalancers do not support mixing TCP and UDP in a single Service even if the underlying implementation does. As such, the chart segregates UDP proxy configuration into its own section.
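Concretely, splitting the stream listens across the two chart sections would look something like the sketch below. The keys follow the kong chart's values layout (`stream` entries with `containerPort`/`servicePort`/`protocol`), but the exact shape should be checked against the chart version in use:

```yaml
proxy:
  enabled: true
  type: LoadBalancer
  # TCP/TLS stream listens stay on the TCP LoadBalancer Service
  stream:
  - containerPort: 8888
    servicePort: 8888
  - containerPort: 8899
    servicePort: 8899
    parameters:
    - ssl
udpProxy:
  enabled: true
  type: LoadBalancer
  # UDP stream listens get their own LoadBalancer Service
  stream:
  - containerPort: 9999
    servicePort: 9999
    protocol: UDP
```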
The KTF addon configures these on `proxy` instead of `udpProxy`, however. It does set their protocol, but they should not be accessible through the LoadBalancer: https://github.com/Kong/kubernetes-testing-framework/blob/456371524d2a1a0c3496efe6c11e9a03dcb199e0/pkg/clusters/addons/kong/addon.go#L566-L567

We should also remove this related hack, as it's no longer relevant: https://github.com/Kong/kubernetes-testing-framework/blob/456371524d2a1a0c3496efe6c11e9a03dcb199e0/pkg/clusters/addons/kong/addon.go#L577-L605
Curiously, this does not appear to affect UDP tests in most runs. Why that is warrants more investigation, but in any case the configuration is incorrect and should be aligned with the typical expected chart configuration.
Acceptance

Change the configuration to use `udpProxy` instead of `proxy` and verify that the UDP tests in KIC are still working (or working better?).