Closed: troian closed this issue 7 years ago
I sometimes see the same problem, and my guess is the controller does not pick up ingress rules updated or created in the meantime. (I think this started happening for me when going from nginx-ingress-controller:0.9.0-beta.5 to nginx-ingress-controller:0.9.0-beta.7.)
The only thing that worked for me was to gradually restart the old nginx-ingress instances. The fresh ones work as expected.
Here is a bash script that does these restarts:
#!/bin/bash -
set -o nounset
BASE=$(cd "$(dirname "$0")" && pwd)
pushd "${BASE}" > /dev/null
for i in $(kubectl get pods -n kube-system | grep nginx-ingress-lb | awk '{print $1}')
do
  echo "will kill ${i}"
  kubectl delete "pod/${i}" -n kube-system
  echo "Waiting 30 seconds for new pod to come up before killing next old pod..."
  sleep 30
done
popd > /dev/null
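The grep/awk pipeline the script relies on can be exercised in isolation against captured kubectl output; the pod names below are made up for illustration:

```shell
# Sample `kubectl get pods -n kube-system` output (pod names are illustrative).
pods='NAME                    READY   STATUS    RESTARTS   AGE
nginx-ingress-lb-abc12      1/1     Running   0          3d
nginx-ingress-lb-def34      1/1     Running   0          3d
kube-dns-xyz                3/3     Running   0          9d'
# Same name extraction the restart script performs:
printf '%s\n' "$pods" | grep nginx-ingress-lb | awk '{print $1}'
# → nginx-ingress-lb-abc12
# → nginx-ingress-lb-def34
```

On clusters newer than this thread, `kubectl rollout restart deployment/<name>` achieves a similar rolling restart without a hand-rolled loop; that subcommand did not exist at the time.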
@weitzj I wonder if this may be related to https://github.com/kubernetes/ingress/issues/768 - especially if a restart fixes the problem.
@weitzj please update the image to quay.io/aledbf/nginx-ingress-controller:0.132 (current master)
@weitzj restart does not work in my case. @aledbf does your ingress 0.132 contain something specific to that issue? Anyway, I'll try it soon.
@troian it contains the fix for 768 and PRs 822, 823 and 824.
@aledbf Your image quay.io/aledbf/nginx-ingress-controller:0.132 works for me.
The steps I took: kubectl describe pod/nginx-ingress-... to see whether your image is in use (it is, as shown by git-1ea89a61).
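The image check can also be scripted rather than read by eye; a sketch against captured describe output (the sample text is illustrative, not copied from this cluster):

```shell
# Captured `kubectl describe pod` output; in a live cluster, pipe
# `kubectl describe pod/<name> -n kube-system` into awk instead.
describe='Name:    nginx-ingress-abc12
Image:   quay.io/aledbf/nginx-ingress-controller:0.132'
printf '%s\n' "$describe" | awk '/Image:/{print $2}'
# → quay.io/aledbf/nginx-ingress-controller:0.132
```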
Btw.: the nginx controller runs using the cluster-admin role for now, since I thought RBAC might be an issue.
@aledbf thanks
What I wonder about is why it serves the Fake certificate even though --default-ssl-certificate is specified in the arguments and the ingress contains only one domain with the same certificate chain.
@troian I also see these 503 timeouts with the current quay.io/aledbf/nginx-ingress-controller:0.132, but only if the liveness/readiness probes did not succeed. I guess this is the intended behaviour, which makes sense to me.
"but only if liveness/readiness probes did not succeed."
There is nothing we can do to avoid a 503 in that situation.
@weitzj, @aledbf ok, makes sense. I'm not familiar with that yet. Any particular reason they might not succeed, even 5 minutes after pod start? One root cause (presumably) is that Chrome shows such an error when the ingress returns the Fake Certificate.
Seems the image quay.io/aledbf/nginx-ingress-controller:0.132 helps. Thanks everyone. Resolving.
This works for minikube as well with:
kubectl get pods -n kube-system --selector="app.kubernetes.io/name=nginx-ingress-controller" -oname
I'm often experiencing 503 responses from nginx-ingress-controller, which also returns the Kubernetes Ingress Controller Fake Certificate (2) instead of the provided wildcard certificate. The image is gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.7.
It looks like at some point nginx cannot resolve the proper server_name and returns the fake certificate. But then why does it ignore the --default-ssl-certificate argument? Anyway, I'm out of ideas, so any help is appreciated.
The cluster is running on GKE.
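One way to see which certificate the controller actually serves for a given hostname is openssl s_client with SNI; the certificate-inspection half can be tried locally against a throwaway self-signed cert (all names below are placeholders, not from this setup):

```shell
# Against a live endpoint (placeholders, so left commented out):
#   echo | openssl s_client -connect <ingress-ip>:443 -servername <your-domain> 2>/dev/null \
#     | openssl x509 -noout -subject
# If the subject reads "Kubernetes Ingress Controller Fake Certificate",
# nginx did not match the SNI name to a configured TLS secret.
#
# The same x509 inspection, exercised on a locally generated cert:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=*.example.com" 2>/dev/null
openssl x509 -noout -subject -in /tmp/demo.crt
```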
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
  labels:
    app: nginx-ingress
spec:
  type: LoadBalancer
  ports:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redirected-environment.trysimply.com
  namespace: kube-system
  annotations:
    ingress.kubernetes.io/auth-signin: "https://environment.trysimply.com/oauth2/sign_in"
    ingress.kubernetes.io/auth-url: "https://environment.trysimply.com/oauth2/auth"
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  revisionHistoryLimit: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress
    spec:
      terminationGracePeriodSeconds: 60
      containers:
- --watch-namespace=kube-system
- --ingress-class=nginx
- --force-namespace-isolation=true
- --healthz-port=10254
- --logtostderr
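Since the thread asks why --default-ssl-certificate seems to be ignored, it may help to show how the flag is usually passed alongside args like these; a sketch only, the secret name is hypothetical:

```yaml
# The flag takes a <namespace>/<secret-name> reference to a
# kubernetes.io/tls secret; "wildcard-tls" is a made-up name,
# not taken from this thread.
- --default-ssl-certificate=kube-system/wildcard-tls
```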
daemon off;

worker_processes 1;
pid /run/nginx.pid;
worker_rlimit_nofile 1047552;

events {
    multi_accept on;
    worker_connections 16384;
    use epoll;
}

http {
    set_real_ip_from 0.0.0.0/0;
    real_ip_header X-Forwarded-For;
}

stream {
    log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;
}