Closed: gagarinfan closed this issue 8 months ago
This predates the change to use the new config-available readiness endpoint and was presumably using the old /status endpoint, which would mark Kong containers ready regardless of whether they had configuration available. Newer versions of the ingress controller should have prevented Pods from becoming ready in sidecar mode before configuration was loaded, but it's not clear which version of the controller was used here.
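For reference, the config-aware readiness check can be expressed as a probe against Kong's /status/ready endpoint (added in Kong 3.3), which returns 200 only once the node has loaded configuration. The port and timings below are assumptions based on the default status listener, not values taken from this report:

```yaml
# Sketch of a config-aware readiness probe for the Kong proxy container.
# Port 8100 assumes the default status listener (status_listen = 0.0.0.0:8100).
readinessProbe:
  httpGet:
    path: /status/ready   # 200 only after configuration has been loaded
    port: 8100
  initialDelaySeconds: 5
  periodSeconds: 10
```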
AFAIK this will not happen on current versions (controller 3.0, Kong 3.5, chart 2.33). My test plan was:
- Configure a certificate for kong.example and the default self-signed certificate for wrong.example (the latter does not match any configured route/SNI and should serve the fallback certificate).
- Continuously send requests for kong.example to the proxy Service LB IP using curl while logging the presented certificate subject to a file.
- Run kubectl rollout restart on the Kong Deployment.
- Check whether any request was served something other than the kong.example certificate.

Results did not indicate that any of the requests saw the fallback certificate:
pod_restart_status.txt curl_test.txt
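The continuous-request check in that test plan can be sketched as a shell loop plus an offline log check. The LB_IP variable, the polling interval, and the sample log contents below are assumptions for illustration, not data from the attached files:

```shell
# Capture loop (illustrative only; requires a live proxy at $LB_IP):
#   while true; do
#     openssl s_client -connect "$LB_IP:443" -servername kong.example </dev/null 2>/dev/null \
#       | openssl x509 -noout -subject >> curl_test.txt
#     sleep 0.2
#   done

# Offline check of the resulting log (sample contents are hypothetical):
cat > curl_test.txt <<'EOF'
subject=CN=kong.example
subject=CN=kong.example
subject=CN=localhost
EOF

# Count requests that saw anything other than the kong.example certificate
grep -cv 'kong.example' curl_test.txt   # prints 1 for this sample
```

A count of 0 over the real log would indicate no request ever hit the fallback certificate during the rollout.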
Based on my knowledge of the old and new readiness behaviors, the original report was likely caused by the old behavior bringing Pods into the Service before they were truly ready. Testing indicates that the new readiness behavior works as expected and that kube-proxy will not dispatch requests to instances before they've received configuration. I'm closing this as outdated; if you continue to see incorrect certificates served with current versions, please respond with updated reproduction steps.
I faced a similar issue. After configuring the ClusterIssuer and obtaining the Certificate, adding the protocol annotation helped me serve the right certificate.
apiVersion: v1
kind: Service
metadata:
  name: service-basic-nuxt-app
  # annotations:
  #   konghq.com/protocol: https
  #   konghq.com/plugins: rate-limit-5-min
spec:
  selector:
    app: basic-nuxt-app
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ing-basic-nuxt-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    konghq.com/preserve-host: "true"
    konghq.com/strip-path: "true"
    konghq.com/protocols: "https"
    # redirects to HTTPS
    konghq.com/https-redirect-status-code: "301"
spec:
  ingressClassName: kong
  tls:
    - secretName: k8-prod-certs
      hosts:
        - example.com
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: service-basic-nuxt-app
                port:
                  number: 3000
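For clarity, the fix described in that comment amounts to uncommenting the protocol annotation on the Service. The snippet below reuses the names from the manifest above; whether https is the correct value depends on whether the upstream app itself serves TLS:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-basic-nuxt-app
  annotations:
    # Tells Kong which protocol to use toward this upstream; "https" is
    # only valid if the app listening on port 3000 actually serves TLS.
    konghq.com/protocol: https
spec:
  selector:
    app: basic-nuxt-app
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
```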
Is there an existing issue for this?
Current Behavior
When I call the URL exposed via Ingress during KIC pod rollouts, I briefly receive errors about a self-signed certificate, and then it starts returning the proper one (Let's Encrypt):
In the logs we observed:
We're running KIC integrated with an AWS NLB on EKS. It runs as a DaemonSet on all nodes (the number of running pods depends on how many nodes are added by the autoscaler [Karpenter]).
The certificate itself is not changed during KIC deployments/rollouts.
Expected Behavior
When I call any URL exposed with KIC in our cluster I should receive "no route to host" or "service unavailable", or preferably traffic should not be directed to pods that haven't started properly yet.
Steps To Reproduce
Kong Ingress Controller version
Kubernetes version
Anything else?
values.yaml file