amarkevich opened this issue 1 year ago (status: Open)
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
502 is a backend error; you should check your service.
upstream: "https://0.0.0.1:80/context/" suggests that, due to a hostname resolution issue, the request goes to 0.0.0.1 instead of the service/pod IP.
So hostname resolution issues may be better discussed in the sig-network channel of the Kubernetes Slack. I am curious what the ingress-controller problem in this issue actually is.
/remove-kind bug
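For intermittent 502s like this, one common thing to check is whether the backend Service briefly ends up with no ready endpoints (for example during a rollout or a failing readiness check). Below is a minimal sketch of a readiness probe on the backend Deployment; the Deployment name, image, and port are illustrative, and the probe path only mirrors the /context/ endpoint mentioned in this issue:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app        # illustrative name, not taken from this issue
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:latest  # placeholder image
          ports:
            - containerPort: 8080
          # Only receive traffic from the Service once /context/ answers,
          # so the controller does not balance to a pod that is not ready.
          readinessProbe:
            httpGet:
              path: /context/
              port: 8080
            periodSeconds: 10
            failureThreshold: 3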
@longwuyuan this is a part of the generated NGINX configuration; server 0.0.0.1 is the static placeholder that ingress-nginx always writes into the upstream_balancer block (the actual backend is picked by the Lua balancer at request time), so it does not come from hostname resolution:

http {
  upstream upstream_balancer {
    server 0.0.0.1; # placeholder

At the same time, another application works correctly:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app2
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "AUTH_SESSION_ID"
spec:
  tls:
    - hosts:
        - app2.corp.com
      secretName: server-tls
  rules:
    - host: app2.corp.com
      http:
        paths:
          - backend:
              service:
                name: app2
                port:
                  number: 8080
            path: /
            pathType: Prefix
This is stale, but we won't close it automatically. Just bear in mind that the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out in #ingress-nginx-dev on Kubernetes Slack.
What happened:
Elastic Heartbeat running inside the cluster performs HEAD requests to https://app.corp.com/context/ every 15 seconds. The application runs in the same cluster. Roughly once every 2-3 hours the request gets a 502 response.
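For reference, a Heartbeat HTTP monitor along these lines would produce the traffic described above; this is an illustrative sketch, not the actual monitor configuration from this setup:

heartbeat.monitors:
  - type: http
    id: app-context
    name: app /context/ check
    schedule: '@every 15s'
    urls: ["https://app.corp.com/context/"]
    # HEAD request every 15 seconds, matching the behaviour described above
    check.request.method: HEAD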
What you expected to happen:
A successful response
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.): 1.9.4
Kubernetes version (use kubectl version): 1.26.6
Environment (uname -a): V1-202309.06.0
Custom CoreDNS configuration: app.corp.com is resolved to the Ingress NGINX Controller ExternalIP
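Such an override is often done with the CoreDNS hosts plugin; a sketch of what that can look like, assuming a cluster that supports a coredns-custom ConfigMap (e.g. AKS). The exact mechanism depends on the cluster, and the ConfigMap key and IP address below are placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  corp-hosts.override: |
    # Resolve the application hostname to the ingress controller's external IP
    # (203.0.113.10 is a placeholder for the actual ExternalIP).
    hosts {
      203.0.113.10 app.corp.com
      fallthrough
    }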
How was the ingress-nginx-controller installed: Helm chart 4.8.3