Closed tholvoleak closed 2 years ago
@tholvoleak: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Change the service web to --type ClusterIP.
Thanks, Long
/close
> Change the service web to --type ClusterIP. Thanks, Long
I have changed it, but it still does not work. I updated the info above.
Hi, this is basic functionality of the ingress-nginx-controller, so it's not a bug, and it seems like you are asking for support. Please discuss in the ingress-nginx-users channel at kubernetes.slack.com. You can register, if required, at slack.k8s.io.
If you later find a bug or a problem, you can reopen this issue, so I will close it for now. Thanks.
/close
@longwuyuan: Closing this issue.
Hi,
Since this is basic functionality of the ingress-nginx-controller, how do I allow the ingress-nginx-controller to reach the application pod? Right now it's unreachable.
I thought the flow was ingress-nginx-controller -> service -> pod.
Please discuss in the ingress-nginx-users channel at kubernetes.slack.com. You can register, if required, at slack.k8s.io.
Hi, I have set up an RKE Kubernetes cluster and tried to deploy an application and create an Ingress to expose it externally, but I got a "502 Bad Gateway".
cat nginx-app.yml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  selector:
    matchLabels:
      run: nginx-app
  replicas: 3
  template:
    metadata:
      labels:
        run: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```
cat nginx-service.yml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-app
```
cat nginx-ingress.yml

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 8080
```
kubectl get pod -o wide

```
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
nginx-app-744fc45d8f-drnml   1/1     Running   0          14m   10.42.0.16   10.*.*.207   <none>           <none>
nginx-app-744fc45d8f-lc9zn   1/1     Running   0          14m   10.42.0.15   10.*.*.207   <none>           <none>
nginx-app-744fc45d8f-njjkr   1/1     Running   0          14m   10.42.0.14   10.*.*.207   <none>           <none>
```
kubectl get svc -o wide

```
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE    SELECTOR
nginx-service   ClusterIP   10.43.89.106   <none>        8080/TCP   8m2s   run=nginx-app
```
kubectl get ingress -o wide

```
NAME            CLASS    HOSTS   ADDRESS      PORTS   AGE
nginx-ingress   <none>   *       10.*.*.207   80      22m
```
curl http://10.*.*.207/demo

```html
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
```
Error logs of the nginx-ingress-controller pod:

```
2021/12/28 06:17:41 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.16:80/demo", host: "10.*.*.207"
2021/12/28 06:17:42 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.14:80/demo", host: "10.*.*.207"
2021/12/28 06:17:43 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.15:80/demo", host: "10.*.*.207"
10.*.*.207 - - [28/Dec/2021:06:17:43 +0000] "GET /demo HTTP/1.1" 502 150 "-" "curl/7.61.1" 82 3.068 [ingress-nginx-nginx-service-8080] [] 10.42.0.16:80, 10.42.0.14:80, 10.42.0.15:80 0, 0, 0 1.020, 1.024, 1.024 502, 502, 502 93cf678d8d8710e02845a378cd59ed20
```
I wonder why the nginx controller is trying to connect to the application pod nginx-app directly (upstream: "http://10.42.0.16:80/demo") rather than to the service nginx-service?
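One quick way to narrow a 502 like this down is to test the Service and a pod IP from inside the cluster, bypassing the ingress entirely (a sketch; the throwaway pod name and curl image are illustrative, not from this thread):

```shell
# Run a throwaway pod and curl the Service by name from inside the cluster
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv http://nginx-service:8080/

# Then hit one of the pod IPs directly to check pod-to-pod networking
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv http://10.42.0.16:80/
```

If the Service responds but the pod IP does not, the problem is in the pod network (CNI or host firewall), not in the Ingress resource itself.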
I've hit this issue on a bare-metal (Fedora) microk8s setup. The root cause was that firewalld was running. Once I disabled it and made sure that iptables-legacy was in use rather than nftables, the Ingress was able to reach the pod by its IP and returned the requested page. firewalld cannot create some of the rules required by Kubernetes, and this also shows up in the firewalld logs.
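A sketch of the checks described above, assuming a systemd-based host (the `update-alternatives` step applies to distros that manage the iptables backend that way; Fedora and others may differ):

```shell
# Check whether firewalld is active, and stop/disable it if so
systemctl is-active firewalld
sudo systemctl disable --now firewalld

# See which iptables backend is in use; the version string reports
# either "(nf_tables)" or "(legacy)"
iptables --version

# On distros using the alternatives system, switch to the legacy backend
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
```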
To troubleshoot your issue, you can post the output of the following commands and upload the log here:
```shell
kubectl get all -A && echo && kubectl get nodes && echo && kubectl cluster-info

kubectl delete -f nginx-app.yml
kubectl delete -f nginx-service.yml
kubectl delete -f nginx-ingress.yml

# in a separate terminal, capture the journal while re-applying:
journalctl -f > journalctl.log

kubectl apply -f nginx-app.yml
kubectl apply -f nginx-service.yml
kubectl apply -f nginx-ingress.yml
```
The logs:

```
2021/12/28 06:17:41 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.16:80/demo", host: "10.*.*.207"
2021/12/28 06:17:42 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.14:80/demo", host: "10.*.*.207"
2021/12/28 06:17:43 [error] 3256#3256: *411627 connect() failed (113: Host is unreachable) while connecting to upstream, client: 10.*.*.207, server: _, request: "GET /demo HTTP/1.1", upstream: "http://10.42.0.15:80/demo", host: "10.*.*.207"
10.*.*.207 - - [28/Dec/2021:06:17:43 +0000] "GET /demo HTTP/1.1" 502 150 "-" "curl/7.61.1" 82 3.068 [ingress-nginx-nginx-service-8080] [] 10.42.0.16:80, 10.42.0.14:80, 10.42.0.15:80 0, 0, 0 1.020, 1.024, 1.024 502, 502, 502 93cf678d8d8710e02845a378cd59ed20
```
mean that nginx is reaching the application directly through its endpoints, such as 10.42.0.15:80. These sockets are the endpoints of your service. You can list them with:

```shell
kubectl get endpoints nginx-service
```

In this case, these are the endpoints of the service nginx-service.
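With the pod IPs and targetPort shown earlier in this thread, the output would look roughly like this (illustrative, not captured from the cluster):

```shell
kubectl get endpoints nginx-service
# NAME            ENDPOINTS                                   AGE
# nginx-service   10.42.0.14:80,10.42.0.15:80,10.42.0.16:80   8m
```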
But the 502 Bad Gateway and the logs show that the ingress controller is trying to reach the service via its endpoints (it tries each endpoint in turn), and the ingress controller's pod cannot reach any of them.
To test this, exec into the ingress controller pod and check the connection:

```shell
$ kubectl exec -it pod/ingress-nginx-controller-57ff8464d9-pvjpc -- bash
ingress-nginx-controller-57ff8464d9-pvjpc:/etc/nginx$ nc -zv 10.42.0.16 80
nc: 10.42.0.16 (10.42.0.16:80): Host is unreachable
ingress-nginx-controller-57ff8464d9-pvjpc:/etc/nginx$
```

As we can see, it cannot reach the endpoint.
Now look up the ClusterIP of the service nginx-service and try to access it:

```shell
$ kubectl describe service nginx-service
$ kubectl exec -it pod/ingress-nginx-controller-57ff8464d9-pvjpc -- bash
ingress-nginx-controller-57ff8464d9-pvjpc:/etc/nginx$ nc -zv 10.43.89.106 8080
10.43.89.106 (10.43.89.106:8080) open
```

As we can see, the pod does have access via the ClusterIP and port of the service.
So one solution would be the following: tell the Ingress to use the service's ClusterIP:port instead of the endpoints list. To do this, edit the Ingress resource and add the following annotation:

nginx.ingress.kubernetes.io/service-upstream: "true"
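Applied to the Ingress manifest from this thread, that would look like the following sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # Route to the Service's ClusterIP:port instead of the pod endpoints
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
  - http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 8080
```

Then re-apply it with `kubectl apply -f nginx-ingress.yml`. Note that this only routes around the symptom; the underlying pod-network reachability problem is still worth fixing.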
FYI
By default the Ingress-Nginx Controller uses a list of all endpoints (pod IP/port) in the NGINX upstream configuration.
The nginx.ingress.kubernetes.io/service-upstream annotation disables that behavior and instead uses a single upstream in NGINX: the service's ClusterIP and port.
This can be desirable for things like zero-downtime deployments. See issue #257.
If the service-upstream annotation is specified, the following should be taken into consideration:
- The proxy_next_upstream directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.