Closed. aduncmj closed this issue 2 years ago.
Hi,
I have run into this as well.
The validatingwebhook service is not reachable in my private GKE cluster. I needed to open the 8443 port from the master to the pods. On top of that, I then received a certificate error on the endpoint "x509: certificate signed by unknown authority". To fix this, I needed to include the caBundle from the generated secret in the validatingwebhookconfiguration.
A quick fix, if you don't want to do the above and have the webhook fully operational, is to remove the validatingwebhookconfiguration or set its failurePolicy to Ignore.
I believe some fixes are needed in the deploy/static/provider/cloud/deploy.yaml as the webhooks will not always work out of the box.
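For reference, opening the port on GKE can be done with a firewall rule along these lines. This is only a sketch: the rule name, network, master CIDR, and node tag below are placeholders, not values from this thread, and must be replaced with your cluster's actual values.

```shell
# Hypothetical sketch: allow the GKE master range to reach the admission
# webhook port 8443 on the nodes. All names/ranges here are placeholders.
gcloud compute firewall-rules create allow-master-to-webhook \
  --network my-gke-network \
  --source-ranges 172.16.0.0/28 \
  --target-tags my-gke-node-tag \
  --allow tcp:8443
```

The master CIDR can be found in the GKE console under the cluster's private cluster settings, and the node tag on any node VM's details page.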
A quick update on the above, the certificate error should be managed by the patch job that exists in the deployment so that part should be a non-issue. Only the port 8443 needed to be opened from master to pods for me.
Hi, I am a beginner at setting up Kubernetes and ingress. I am facing a similar issue, but more in a bare-metal scenario. I would be very grateful if you could share more details on what you mean by "opening a port between master and pods"?
Update: sorry, as I said, I am new to this. I checked and there is a service (ingress-nginx-controller-admission) exposed on port 443, running in the ingress-nginx namespace. For some reason my ingress resource in the default namespace is not able to communicate with it. Please suggest how I can resolve this.
The error is:
Error from server (InternalError): error when creating "test-nginx-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded
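A quick way to check whether the admission service is reachable from inside the cluster at all is to probe it from a throwaway pod. This is only a sketch, assuming the default service name and namespace from the deploy manifests; note that the path that is actually failing is API server to pod, so a successful probe from a pod narrows things down but does not fully rule out a master-to-node firewall issue.

```shell
# Probe the admission service from a temporary curl pod.
# An immediate "connection refused" or a timeout here points at a
# networking problem rather than the controller's validation logic.
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -k -m 5 https://ingress-nginx-controller-admission.ingress-nginx.svc:443
```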
I'm also facing this issue, on a fresh cluster from AWS where I only did
helm install nginx-ing ingress-nginx/ingress-nginx --set rbac.create=true
And deployed a react service (which I can port-forward to and it works fine).
I then tried to apply both my own ingress and the example ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: exampleService
          servicePort: 80
        path: /
I'm getting this error:
Error from server (InternalError): error when creating "k8s/ingress/test.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://nginx-ing-ingress-nginx-controller-admission.default.svc:443/extensions/v1beta1/ingresses?timeout=30s: stream error: stream ID 7; INTERNAL_ERROR
I traced it down to this line of code by looking at the logs in the controller:
https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/controller.go#L532
Logs:
I0427 11:52:35.894902 6 server.go:61] handling admission controller request /extensions/v1beta1/ingresses?timeout=30s
2020/04/27 11:52:35 http2: panic serving 172.31.16.27:39304: runtime error: invalid memory address or nil pointer dereference
goroutine 2514 [running]:
net/http.(*http2serverConn).runHandler.func1(0xc00000f2c0, 0xc0009a9f8e, 0xc000981980)
/home/ubuntu/.gimme/versions/go1.14.2.linux.amd64/src/net/http/h2_bundle.go:5713 +0x16b
panic(0x1662d00, 0x27c34c0)
/home/ubuntu/.gimme/versions/go1.14.2.linux.amd64/src/runtime/panic.go:969 +0x166
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).getBackendServers(0xc000119a40, 0xc00000f308, 0x1, 0x1, 0x187c833, 0x1b, 0x185e388, 0x0, 0x185e388, 0x0)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:532 +0x6d2
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).getConfiguration(0xc000119a40, 0xc00000f308, 0x1, 0x1, 0x1, 0xc00000f308, 0x0, 0x1, 0x0)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:402 +0x80
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).CheckIngress(0xc000119a40, 0xc000bfc300, 0x50a, 0x580)
/tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:228 +0x2c9
k8s.io/ingress-nginx/internal/admission/controller.(*IngressAdmission).HandleAdmission(0xc0002d4fb0, 0xc000943080, 0x7f8ffce8b1b8, 0xc000942ff0)
/tmp/go/src/k8s.io/ingress-nginx/internal/admission/controller/main.go:73 +0x924
k8s.io/ingress-nginx/internal/admission/controller.(*AdmissionControllerServer).ServeHTTP(0xc000219820, 0x1b05080, 0xc00000f2c0, 0xc000457d00)
/tmp/go/src/k8s.io/ingress-nginx/internal/admission/controller/server.go:70 +0x229
net/http.serverHandler.ServeHTTP(0xc000119ce0, 0x1b05080, 0xc00000f2c0, 0xc000457d00)
/home/ubuntu/.gimme/versions/go1.14.2.linux.amd64/src/net/http/server.go:2807 +0xa3
net/http.initALPNRequest.ServeHTTP(0x1b07440, 0xc00067f170, 0xc0002dc700, 0xc000119ce0, 0x1b05080, 0xc00000f2c0, 0xc000457d00)
/home/ubuntu/.gimme/versions/go1.14.2.linux.amd64/src/net/http/server.go:3381 +0x8d
net/http.(*http2serverConn).runHandler(0xc000981980, 0xc00000f2c0, 0xc000457d00, 0xc000a81480)
/home/ubuntu/.gimme/versions/go1.14.2.linux.amd64/src/net/http/h2_bundle.go:5720 +0x8b
created by net/http.(*http2serverConn).processHeaders
/home/ubuntu/.gimme/versions/go1.14.2.linux.amd64/src/net/http/h2_bundle.go:5454 +0x4e1
Any ideas? Seems strange to get this on a newly setup cluster where I followed the instructions correctly.
I might have solved it..
I followed this guide for the helm installation: https://kubernetes.github.io/ingress-nginx/deploy/
But when I followed this guide instead: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/
The error doesn't occur.
If you have this issue, try it out by deleting your current helm installation.
Get the name:
helm list
Delete and apply stable release:
helm delete <release-name>
helm repo add nginx-stable https://helm.nginx.com/stable
helm install nginx-ing nginx-stable/nginx-ingress
@johan-lejdung not really, that is a different ingress controller.
@aledbf I use 0.31.1 and still have the same problem:
bash-5.0$ /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.31.1
Build: git-b68839118
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.17.10
-------------------------------------------------------------------------------
Error: UPGRADE FAILED: failed to create resource: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded
@aledbf Same error. Bare-metal installation.
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.31.1
Build: git-b68839118
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.17.10
-------------------------------------------------------------------------------
Error from server (InternalError): error when creating "./**ommitted**.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded
I added a note about the webhook port to https://kubernetes.github.io/ingress-nginx/deploy/ along with links for the additional steps needed on GKE.
I still have the problem.
If I disable the webhook, the error goes away:
helm install my-release ingress-nginx/ingress-nginx \
  --set controller.service.type=NodePort \
  --set controller.admissionWebhooks.enabled=false
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/a-service   ClusterIP   10.105.159.98

NAME                                                     READY   STATUS    RESTARTS   AGE
pod/a-deployment-84dcd8bbcc-tgp6d                        1/1     Running   0          28h
pod/b-deployment-f649cd86d-7ss9f                         1/1     Running   0          28h
pod/configmap-pod                                        1/1     Running   0          54m
pod/configmap-pod-1                                      1/1     Running   0          3h33m
pod/my-release-ingress-nginx-controller-7859896977-bfrxp 1/1     Running   0          111m
pod/redis                                                1/1     Running   1          6h11m
pod/test                                                 1/1     Running   1          5h9m
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
spec:
  rules:
  - host: b.abbetwang.top
    http:
      paths:
  tls:
When I run kubectl apply -f new-ingress.yaml I get "Failed calling webhook, failing closed validate.nginx.ingress.kubernetes.io":
I0504 06:22:13.286582 1 trace.go:116] Trace[1725513257]: "Create" url:/apis/networking.k8s.io/v1beta1/namespaces/default/ingresses,user-agent:kubectl/v1.18.2 (linux/amd64) kubernetes/52c56ce,client:192.168.0.133 (started: 2020-05-04 06:21:43.285686113 +0000 UTC m=+59612.475819043) (total time: 30.000880829s):
Trace[1725513257]: [30.000880829s] [30.000785964s] END
W0504 09:21:19.861015 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
W0504 09:31:49.897548 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
I0504 09:36:17.637753 1 trace.go:116] Trace[615862040]: "Call validating webhook" configuration:my-release-ingress-nginx-admission,webhook:validate.nginx.ingress.kubernetes.io,resource:networking.k8s.io/v1beta1, Resource=ingresses,subresource:,operation:CREATE,UID:41f47c75-9ce1-49c0-a898-4022dbc0d7a1 (started: 2020-05-04 09:35:47.637591858 +0000 UTC m=+71256.827724854) (total time: 30.000128816s):
Trace[615862040]: [30.000128816s] [30.000128816s] END
W0504 09:36:17.637774 1 dispatcher.go:133] Failed calling webhook, failing closed validate.nginx.ingress.kubernetes.io: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://my-release-ingress-nginx-controller-admission.default.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded
Why close this issue? What is the solution?
@eltonbfw update to 0.32.0 and make sure the API server can reach the POD running the ingress controller
I have the same problem, and I use 0.32.0. What's the solution? Please, thanks!
For the specific issue, my problem did turn out to be an issue with internal communication. @aledbf added notes to the documentation on verifying connectivity. I had internal communication issues caused by CentOS 8's move to nftables. In my case, I needed additional "rich" allow rules in firewalld for:
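The list of rules was cut off above. Purely as an illustration (these are not the author's actual rules), a firewalld rich rule allowing another node to reach the admission webhook port might look like this; 10.0.0.0/8 stands in for your master/pod network, and 8443 is the default webhook port on the controller pod.

```shell
# Illustrative only: substitute your real source network. 8443 is the
# default port the admission webhook container listens on.
firewall-cmd --permanent --zone=public \
  --add-rich-rule='rule family="ipv4" source address="10.0.0.0/8" port port="8443" protocol="tcp" accept'
firewall-cmd --reload
```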
I have the same issue, baremetal install with CentOS 7 worker nodes.
Have the same issue with 0.32.0 on an HA bare-metal cluster, with strange behaviour. I have two ingresses, A and B:
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: service-alpha
namespace: staging
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: alpha.example.org
http:
paths:
- path: /
backend:
serviceName: service-alpha
servicePort: 1080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: service-beta
namespace: staging
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: beta.example.org
http:
paths:
- path: /user/(.*)
backend:
serviceName: service-users
servicePort: 1080
- path: /data/(.*)
backend:
serviceName: service-data
servicePort: 1080
# kubectl apply -f manifests/ingress-beta.yml
Error from server (InternalError): error when creating "manifests/ingress-beta.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
In the api-server logs, the errors look like this:
I0530 08:05:56.884549 1 trace.go:116] Trace[898207247]: "Call validating webhook" configuration:ingress-nginx-admission,webhook:validate.nginx.ingress.kubernetes.io,resource:networking.k8s.io/v1beta1, Resource=ingresses,subresource:,operation:CREATE,UID:fdce95ab-e2a9-40f5-9ab3-73a85b603db6 (started: 2020-05-30 08:05:26.883895783 +0000 UTC m=+5434.178340436) (total time: 30.000569226s):
Trace[898207247]: [30.000569226s] [30.000569226s] END
W0530 08:05:56.884664 1 dispatcher.go:133] Failed calling webhook, failing closed validate.nginx.ingress.kubernetes.io: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0530 08:05:56.885303 1 trace.go:116] Trace[868353513]: "Create" url:/apis/networking.k8s.io/v1beta1/namespaces/staging/ingresses,user-agent:kubectl/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-05-30 08:05:26.882592405 +0000 UTC m=+5434.177037017) (total time: 30.002669278s):
Trace[868353513]: [30.002669278s] [30.002248351s] END
The main question is: why is the first ingress created most of the time, while the second always fails to create?
Upd. Also, this comment on SO might be useful in investigating the causes of the problem.
Upd 2. When the rewrite annotation is removed, the manifest is applied without errors.
Upd 3. It fails with the combination of multiple paths and the rewrite annotation.
@aledbf Looks like a bug.
We have this issue on a bare-metal k3s cluster. Our HTTP proxy logged this traffic:
gost[515]: 2020/06/09 15:15:37 http.go:151: [http] 192.168.210.21:47396 -> http://:8080 -> ingress-nginx-controller-admission.ingress-nginx.svc:443
gost[515]: 2020/06/09 15:15:37 http.go:241: [route] 192.168.210.21:47396 -> http://:8080 -> ingress-nginx-controller-admission.ingress-nginx.svc:443
gost[515]: 2020/06/09 15:15:37 http.go:262: [http] 192.168.210.21:47396 -> 192.168.210.1:8080 : dial tcp: lookup ingress-nginx-controller-admission.ingress-nginx.svc on 192.168.210.1:53: no such host
@eltonbfw update to 0.32.0 and make sure the API server can reach the POD running the ingress controller
I have the same problem, and I use 0.32.0. What's the solution? Please, thanks!
me too
If you are using the bare-metal install from Kelsey Hightower, my suggestion is to install kubelet on your master nodes, start calico/flannel (or whatever you use for CNI), and label your nodes as masters so that no other pods are scheduled there. Then your control plane will be able to communicate with your nginx deployment and the issue should be fixed. At least this is how it worked for me.
@aledbf This issue still occurs
@andrei-matei Kelsey's cluster works perfectly even without additional CNI plugins and kubelet systemd services installed on master nodes. All you need is to add a route to the Services' CIDR 10.32.0.0/24, using worker node IPs as "next-hop", on the master nodes only.
This way I got ingress-nginx (deployed from the "bare-metal" manifest) and cert-manager webhooks working, but unfortunately not together :( I still don't know why...
Updated: got both of them working.
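The route described above can be sketched as a single ip route command on each master node, assuming the Service CIDR 10.32.0.0/24 from Kubernetes The Hard Way; the worker IP is a placeholder.

```shell
# On each master node: route Service CIDR traffic via a worker node,
# so the API server can reach webhook Services backed by pods on workers.
# <WORKER_IP> is a placeholder for one of your worker node addresses.
ip route add 10.32.0.0/24 via <WORKER_IP>
```

Note that this route is not persistent across reboots unless added to the node's network configuration.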
@aduncmj I found this solution https://stackoverflow.com/questions/61365202/nginx-ingress-service-ingress-nginx-controller-admission-not-found
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
@aduncmj I did the same, thank you for sharing the findings. I'm curious whether this can be handled without manual intervention.
@opensourceonly This worked for me, you can try it: you should add a pathType to the Ingress configuration. https://github.com/kubernetes/ingress-nginx/pull/5445
@moljor I have the same question about:
"On top of that, I then received a certificate error on the endpoint "x509: certificate signed by unknown authority". To fix this, I needed to include the caBundle from the generated secret in the validatingwebhookconfiguration."
How do you make that setup?
I used the openssl tool to make the SSL files and then made the secret, but I do not know how to set up the validatingwebhookconfiguration correctly.
Please help me.
@liminghua999 If you check the deploy yaml, the patch job should "make the validatingwebhookconfiguration good". It exists to update it with the secret.
@moljor
Hi moljor, thanks a lot for your answer.
I got the deploy.yaml file from
https://github.com/kubernetes/ingress-nginx/mirrors/ingress-nginx/raw/master/deploy/static/provider/baremetal/deploy.yaml
The ValidatingWebhookConfiguration webhooks.clientConfig does not configure caBundle. How do I configure it myself?
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.11.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.34.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
- name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - extensions
    - networking.k8s.io
    apiVersions:
    - v1beta1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    service:
      namespace: ingress-nginx
      name: ingress-nginx-controller-admission
      path: /extensions/v1beta1/ingresses
@liminghua999 check https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/deploy.yaml and the last two batch Jobs. They create and update everything (no need to create a secret yourself).
Otherwise reading a bit of the k8s documentation might be helpful if you want to do things yourself: https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
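If you do want to wire the caBundle in by hand instead of relying on the patch job, a rough sketch follows. It assumes the CA certificate ends up in the ingress-nginx-admission secret under the key ca; the secret layout may differ between chart versions, so inspect the secret first.

```shell
# Read the base64-encoded CA bundle from the generated secret...
CA=$(kubectl -n ingress-nginx get secret ingress-nginx-admission \
  -o jsonpath='{.data.ca}')

# ...and patch it into the webhook's clientConfig.
kubectl patch validatingwebhookconfiguration ingress-nginx-admission \
  --type='json' \
  -p="[{\"op\":\"add\",\"path\":\"/webhooks/0/clientConfig/caBundle\",\"value\":\"${CA}\"}]"
```

This is essentially what the chart's patch Job automates for you.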
So, what's the solution?
@aledbf can you please reopen this issue? A huge number of people are having the same problem, so this issue definitely isn't resolved. The instructed solution isn't made clear in either the documentation or the issue comments.
The most common reply here is "turn off webhook validation", but turning off validation doesn't mean the error has gone away, just that it's no longer being reported.
So, what's the solution?
I had a similar problem (but with "connection refused" rather than "context deadline exceeded" as the reason).
The solution of @lbs-rodrigo, deleting the ValidatingWebhookConfiguration so that it can be recreated according to the config, with kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission, fixed my problem.
If your configuration is correct, give it a try.
Hello, I used version 0.30 to solve this problem.
As mentioned by @cnlong, I also updated mine to version v0.34.1 and did not need to remove the ValidatingWebhook, but I had to change the number of pods in the ingress deployment so it is replicated across all my nodes.
I've tried to upgrade from the deprecated helm chart stable/nginx-ingress to ingress-nginx/ingress-nginx (app version 0.35.0) and my ingress deployment crashes with:
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://nginx-ingress-ingress-nginx-controller-admission.default.svc:443/extensions/v1beta1/ingresses?timeout=30s: dial tcp 10.100.146.146:443: connect: connection refused
I used the minimal configuration shown in the documentation, but the Ingress resource gives the same error:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
Error from server: error when creating "disabled/my-ingress-prod-v3.yaml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: rejecting admission review because the request does not contains an Ingress resource but networking.k8s.io/v1, Resource=ingresses with name minimal-ingress in namespace my-pro
This is still an issue - using version: v0.35.0
kubectl apply -f ingress-single.yaml --kubeconfig=/home/mansaka/softwares/k8sClusteryaml/kubectl.yaml worked for me
Solution: delete your ValidatingWebhookConfiguration
kubectl get -A ValidatingWebhookConfiguration
NAME
nginx-ingress-ingress-nginx-admission
kubectl delete -A ValidatingWebhookConfiguration nginx-ingress-ingress-nginx-admission
The solution from vosuyak worked for me, using kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission while in the namespace where I'm applying the ingress rules.
Update on 2020-10-07
In my scenario, the problem is caused by the custom CNI plugin weave-net, which makes the API server unable to reach the overlay network. The solution is either using the EKS default CNI plugin, or adding hostNetwork: true to the ingress-nginx-controller-admission Deployment spec. The latter has some other issues that one needs to take care of.
----------------Original comment----------------
Removing the ValidatingWebhookConfiguration only disables the validation. Your ingress may get persisted, but once your ingress has some configuration error, your nginx ingress controller will be doomed.
I don't think the PathType fix 5445 has anything to do with this error. It says
Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded
which IMHO tells us that the ingress admission service cannot be reached from the control plane (8443 is the default port exposed by the pod, and 443 is the port the Service exposes in front of the pod/Deployment).
I'm encountering this error on AWS EKS, K8S version 1.17. It occurred to me that this might have something to do with security group settings, but I tried every possible way to make sure the control plane can reach the worker node on any port, and the problem still isn't resolved. 😞
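The 8443 to 443 mapping described above can be confirmed on your own cluster by inspecting the admission Service, assuming the default names from the deploy manifests:

```shell
# Should print the Service port and the pod target port it forwards to
# (443 -> 8443 in the default manifests). The firewall / security group
# must allow the control plane to reach the *target* port on the nodes.
kubectl -n ingress-nginx get svc ingress-nginx-controller-admission \
  -o jsonpath='{.spec.ports[0].port} -> {.spec.ports[0].targetPort}{"\n"}'
```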
I agree, deleting the ValidatingWebhookConfiguration is not a good solution, because it is very unsafe. But I don't know how to solve this problem. I used the same steps and there is no problem on Kubernetes v1.17.5, but on Kubernetes v1.19.x there is an error:
Error from server (InternalError): error when creating "/root/ingress-v1.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s": x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, kubernetes.default.svc.cluster.local, not ingress-nginx-controller-admission.ingress-nginx.svc
Chart Version:
# cat Chart.yaml
apiVersion: v1
appVersion: 0.35.0
description: Ingress controller for Kubernetes using NGINX as a reverse proxy and
load balancer
home: https://github.com/kubernetes/ingress-nginx
icon: https://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/Nginx_logo.svg/500px-Nginx_logo.svg.png
keywords:
- ingress
- nginx
kubeVersion: '>=1.16.0-0'
maintainers:
- name: ChiefAlexander
name: ingress-nginx
sources:
- https://github.com/kubernetes/ingress-nginx
version: 3.3.0
Is there any other solution besides deleting ValidatingWebhookConfiguration?
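For the x509 error above, one way to see which names the webhook certificate is actually valid for is to dump its SANs. This is a sketch; it assumes the default controller Deployment name and that the webhook listens on 8443.

```shell
# Forward the admission port locally, then print the certificate's
# Subject Alternative Names; the service DNS name
# ingress-nginx-controller-admission.ingress-nginx.svc should be listed.
kubectl -n ingress-nginx port-forward deploy/ingress-nginx-controller 8443:8443 &
sleep 2
openssl s_client -connect localhost:8443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
```

If the SANs are wrong, deleting the admission secret and re-running the chart's cert-generation Job regenerates the certificate.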
Same for me.
context deadline exceeded
I get the same error when I have two ingress controllers (nginx and AWS ALB) deployed in the EKS cluster. When my helm installation tries to create an ingress with class=alb, this webhook is called and results in an error. Is there a way to limit this webhook to just nginx ingresses?
Would you please reopen this issue @aledbf ?
This error means that the Kubernetes API Server can't connect to the admission webhook (a workload running inside the Kubernetes cluster).
Solution for GKE is actually perfectly documented: https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#console_9. Just create a firewall rule to allow API Server -> workload traffic.
For other Kubernetes deployments try to login to the API Server host and connect to the provided URL yourself. If it doesn't work, figure out routing, firewalls and name resolution.
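For example, on a self-hosted cluster you could log in to the API server host and probe the webhook endpoint directly; the URL below is taken from the error messages in this thread.

```shell
# -k skips TLS verification; here we only care whether the endpoint is
# reachable at all. The Service DNS name resolves only if this host uses
# the cluster DNS, so you may need to substitute the Service's ClusterIP.
curl -k -m 5 https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses
```

Any HTTP response (even an error body) means connectivity is fine; a timeout or "connection refused" means routing, firewalls, or name resolution need fixing.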
@amlinux Using GKE, and adding the rule did not help, unfortunately. I have the following firewall rules and it still does not work:

Name                                     Type     Targets                     Filters                     Protocols/ports                 Action  Priority  Logs
gke-allow-http-s-vo-dev3                 Ingress  Apply to all                IP ranges: 0.0.0.0/0        tcp:80,443                      Allow   1000      Off
gke-allow-master-vo-dev3                 Ingress  Apply to all                IP ranges: 172.16.10.0/28   tcp:443,10250,80                Allow   1000      Off
gke-vo-dev3-d5b3cd68-all                 Ingress  gke-vo-dev3-d5b3cd68-node   IP ranges: 10.0.0.0/14      tcp; udp; esp; ah               Allow   1000      Off
gke-vo-dev3-d5b3cd68-master              Ingress  gke-vo-dev3-d5b3cd68-node   IP ranges: 172.16.10.0/28   tcp:10250,443                   Allow   1000      Off
gke-vo-dev3-d5b3cd68-vms                 Ingress  gke-vo-dev3-d5b3cd68-node   IP ranges: 172.16.0.0/28    tcp:1-65535; udp:1-65535; icmp  Allow   1000      Off
k8s-a13d627c779ffa7b-node-http-hc        Ingress  gke-vo-dev3-d5b3cd68-node   IP ranges: 130.211.0.0/22,  tcp:10256                       Allow   1000      Off
k8s-fw-a52358d4ebd364640b91ca2f4dd1b190  Ingress  gke-vo-dev3-d5b3cd68-node   IP ranges: 0.0.0.0/0        tcp:80,443                      Allow   1000      Off
What are your master network range and GKE node label? Which of the rules is supposed to allow master traffic to the nodes?
Master network range is: 172.16.10.0/28
By GKE node label, you mean k8s labels?
Labels:
  beta.kubernetes.io/arch=amd64
  beta.kubernetes.io/instance-type=n1-standard-2
  beta.kubernetes.io/os=linux
  cloud.google.com/gke-nodepool=node-pool-dev3
  cloud.google.com/gke-os-distribution=cos
  cloud.google.com/gke-preemptible=true
  failure-domain.beta.kubernetes.io/region=europe-north1
  failure-domain.beta.kubernetes.io/zone=europe-north1-a
  kubernetes.io/arch=amd64
  kubernetes.io/hostname=gke-vo-dev3-node-pool-dev3-46fcaf79-twx7
  kubernetes.io/os=linux
And this rule, I think, is the one that allows master traffic to the nodes: gke-allow-master-vo-dev3
Unfortunately formatting has been lost in copy-paste, and now it's very hard to say what your rules do.
gke-allow-master-vo-dev3 doesn't seem to be the right one, as it only allows ports 443 and 10250 (HTTPS and the standard kubelet port). What you need to open is traffic from the master to the port the admission webhook is listening on.
To make it simple, open all ports from the master range to all nodes, and maybe also to the secondary range of the cluster (the IPs allocated to pods), make sure everything works, and then step back and tighten the rules.
When I opened all tcp ports for the rule as below, it works:
gke-allow-master-vo-dev3
  Network: vpc-network-dev3
  Priority: 1000
  Direction: Ingress
  Action on match: Allow
  Source filters: IP ranges 172.16.10.0/28
  Protocols and ports: tcp
  Enforcement: Enabled
  Logs: Off
  Insights: None
How do I tighten them? And how do I check which ports it listens on? It seems to me it is 443?
k describe ValidatingWebhookConfiguration ingress-nginx-admission
Name: ingress-nginx-admission
Namespace:
Labels: app.kubernetes.io/component=admission-webhook
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=0.40.2
helm.sh/chart=ingress-nginx-3.7.1
Annotations:
However, when I go back to opening only 80, 443 and 10250 on that rule, it does not work.
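That matches the port mapping discussed earlier in this thread: the Service listens on 443 but forwards to the pod's webhook port, 8443 in the default manifests, and the GKE firewall filters on the target port. As a hypothetical sketch (using the rule name from the comment above), tightening the rule while keeping the webhook working could look like:

```shell
# Restrict the master rule to kubelet (10250), HTTPS (443), and the
# admission webhook's target port (8443 by default). Verify your actual
# target port with:
#   kubectl -n ingress-nginx get svc ingress-nginx-controller-admission \
#     -o jsonpath='{.spec.ports[0].targetPort}'
gcloud compute firewall-rules update gke-allow-master-vo-dev3 \
  --allow tcp:443,tcp:10250,tcp:8443
```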
Hi all,
When I apply the ingress configuration file named ingress-myapp.yaml with the command kubectl apply -f ingress-myapp.yaml, I get an error. The complete error is as follows:
This is my ingress:
Has anyone encountered this problem?