Closed: davem-git closed this 1 month ago.
/assign harry1064
/triage accepted /priority backlog
@strongjz: GitHub didn't allow me to assign the following users: harry1064.
Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
/assign
@strongjz @longwuyuan
I am able to reproduce this issue.
Root Cause:
When a path is specified in an Ingress without a host and the rewrite annotation is set, all the paths go under server_name `_`, and a `location ~* "^/"` block gets added just before `location /healthz`, which causes the 404 Not Found issue.
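The mechanism can be sketched with a small simulation (a simplified model, not the controller's actual code): nginx remembers the longest matching prefix location, then tries regex locations in configuration order, and a matching regex wins over the prefix match. So the generated catch-all `location ~* "^/"` swallows requests to `/healthz`:

```python
import re

# Simplified sketch of nginx's location selection. Assumption: no "=" or
# "^~" modifiers are in play, which matches the config generated here.
def select_location(uri, regex_locations, prefix_locations):
    # Remember the longest matching prefix location.
    best_prefix = max(
        (p for p in prefix_locations if uri.startswith(p)),
        key=len,
        default=None,
    )
    # Regex locations are tried in order; the first match wins over the prefix.
    for pattern in regex_locations:
        if re.search(pattern, uri, re.IGNORECASE):
            return '~* "%s"' % pattern
    return best_prefix

# With a host-less Ingress plus rewrite-target, a catch-all regex is generated:
regexes = ["^/foo(/|$)(.*)", "^/"]
prefixes = ["/healthz"]

print(select_location("/healthz", regexes, prefixes))  # ~* "^/" shadows /healthz
# Without any regex locations, the prefix location handles it:
print(select_location("/healthz", [], prefixes))       # /healthz
```

This is why the health endpoint only breaks once a rewrite-enabled, host-less ingress forces regex locations into the `_` server block.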
Thanks. It was expected, and kind of makes simple logical sense based on how server & location blocks get configured in the general scheme of the controller's Lua pieces. In my opinion this is an academic scenario and not a real-world one, but others may differ. My vote is to wait for Ricardo to complete all the cp/dp split work and then ask if we want to allow this kind of use-case to begin with.
But that is just my opinion. Time & energy allocation is left to you and other opinions.
Facing the same issue. Following.
Checking in, as it's been 2 months now. Is there a timeline on this? It's preventing us from implementing rewrites without a host.
We would also like to know if there is any timeline here.
I think additional info will be a step forward. Even though @harry1064 mentioned his test, personally I feel the details are not elaborate enough. For example, can anyone confirm that the use case here is:
- `ingress.spec.rules.host` not defined
- multiple paths like `/`, `/home`, `/about`, `/contact` configured in the same one single ingress resource

Also, please confirm that you want all traffic to the cluster routed using this one single ingress resource's rules.
```
% k explain ingress.spec.rules.host
KIND:       Ingress
VERSION:    networking.k8s.io/v1

FIELD:      host

DESCRIPTION:
    Host is the fully qualified domain name of a network host, as defined by
    RFC 3986. Note the following deviations from the "host" part of the URI as
    defined in RFC 3986:
    1. IPs are not allowed. Currently an IngressRuleValue can only apply to the
       IP in the Spec of the parent Ingress.
    2. The `:` delimiter is not respected because ports are not allowed.
       Currently the port of an Ingress is implicitly :80 for http and :443 for
       https.
    Both these may change in the future. Incoming requests are matched against
    the host before the IngressRuleValue. If the host is unspecified, the
    Ingress routes all traffic based on the specified IngressRuleValue.
```
@longwuyuan to answer your questions.

> Ingress resource created with the field ingress.spec.rules.host not defined

Correct.

> Multiple paths like /, /home, /about, /contact in same one single ingress resource configured

I believe this is incorrect for any ingresses with the `nginx.ingress.kubernetes.io/rewrite-target` annotation, which would make multiple paths rewrite to the same place.

> Only one single ingress resource existing in the entire cluster

Incorrect; each service will have its own ingress, with non-conflicting paths, for example /api/${SERVICENAME}/

> And that one single ingress resource is also configured with the rewrite annotation

Since the above is incorrect, this statement doesn't really apply. An individual ingress could have a rewrite annotation. However, we do not have 1 ingress per cluster.

Please let me know if anything above is not clear.

> Also, please confirm that you want all traffic to cluster routed using this one single ingress resource rules

Incorrect.
@longwuyuan did my response clear anything up? Are there any follow-up questions I can answer?
@davem-git, it looks like you will create several ingress resources, none of them will be configured with a host field, and all of them will have multiple rules for routing based on path only, with only one rule per path.
Is this correct?
That's mostly correct. We will have many ingress resources without a hostname. Some of those ingresses will have multiple rules for routing based on paths. Some of them will have a single rule based on a path. Some of them will have the rewrite target `nginx.ingress.kubernetes.io/rewrite-target: /$2`.
When this rewrite target is in place on an ingress without a hostname, the built-in nginx health endpoint is no longer reachable.
Same issue for me, like @davem-git.
We need to use routing based on paths, and as soon as the rewrite-target is in place,
/healthz is not reachable anymore, making our front load balancer stop forwarding traffic.
This can be seen as a security issue, letting anyone able to write an ingress (when the annotation is enabled) DoS all apps served by the ingress.
I tested lots of things using annotations, but was not able to find the right way, so any help is appreciated.
It may help to state the use case of creating ingress resources without using the host field. Do you use TLS?

No Host, No TLS in my case. I do put the host as we only serve one FQDN; will try adding it to see how it goes.
Is this in production? Also, it may help if you describe what blocks you from using a host field value, without a DNS record, and resolving that host value to an ipaddress using the curl --resolve flag or /etc/hosts entries, if not in production.
Ours is already in production for one of our products, and it works great. We don't need to use the rewrite annotation there. The part I'm stressing is the bug in the health check.
What prevents us from using the host field is internal routing. As of now we do use the host field, and our services communicating with other services internal to the cluster resolve the public IP. The traffic leaves via the cluster's egress and goes back in through the ingress to route to each service. This is very inefficient.
Switching to path-based routing allows external calls to use kubernetes-public.domain.com/api/servicename, and internal calls to use http://ingress-nginx-internal.ingress-nginx.svc.cluster.local/api/servicename.
Besides that, with the hostname approach we incur a large cost with connections. Having an end user connect to 130 different hostnames, which are all on the same clusters, as we are currently doing, is causing performance issues.
@longwuyuan I will open a new issue, so as not to pollute this one, as I tried with host and TLS and got the same behaviour. @davem-git I hope you find a workaround.
Hmm, I don't seem to have that issue with a hostname. Your health check fails with a rewrite and a hostname?
Thanks @seb-835
We are also experiencing this issue
i made some progress on this. If we specify a default backend, it seems to eliminate this problem
> i made some progress on this. If we specify a default backend, it seems to eliminate this problem

Can you explain how you did that?
Update the helm values to include the default backend. With that extra setting and deployment, I no longer see the issue.

```yaml
defaultBackend:
  enabled: true
```
@seb-835 did it fix your issue?
@davem-git this did solve the issue for me, thx a lot! Quick question though: would this be
This should work without the default backend. It's a regex issue. I think the contributors mentioned some upcoming changes planned that will fix it too, so they don't want to do anything now. I'm using this as a final fix. It seems like a more thorough health check to me.
The health probes on our load balancer began working after adding this annotation:

```
controller.service.annotations.service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path=/healthz
```

and this setting:

```
defaultBackend.enabled=true
```

to our ingress controllers.
So, guys, this issue is still alive. What I had: AWS EKS, v1.25.7-eks-a59e1f0.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # Update the replicas from 2 to 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: public.ecr.aws/nginx/nginx
          ports:
            - name: app-port
              containerPort: 80
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  type: ClusterIP
  ports:
    - name: service-port
      port: 80
      targetPort: 80
      protocol: TCP
```
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ing
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: testing
  rules:
    - http:
        paths:
          - path: /nginx(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80
```
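For context, the `rewrite-target: /$2` annotation used here replaces the request URI with the second capture group of the path regex. A quick sketch of that substitution (hypothetical paths, with Python's `re` standing in for nginx's PCRE engine):

```python
import re

# The path regex from the Ingress above.
pattern = r"/nginx(/|$)(.*)"

def rewrite(uri):
    m = re.match(pattern, uri)
    if not m:
        return None  # no match: the request falls through to the catch-all location
    return "/" + m.group(2)  # rewrite-target: /$2 keeps only the second group

print(rewrite("/nginx/index.html"))  # -> /index.html
print(rewrite("/nginx"))             # -> /
print(rewrite("/healthz"))           # -> None
```

Note that `/healthz` never matches this pattern, so whether it gets a 200 depends entirely on what other location blocks the controller generates for the `_` server.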
```yaml
controller:
  kind: DaemonSet
  minAvailable: 2
  ingressClassResource:
    name: testing
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx-testing"
    parameters: {}
  ingressClass: "testing"
  autoscaling:
    enabled: false
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  service:
    type: NodePort
    enableHttp: true
    enableHttps: true
    nodePorts:
      http: 33080
      https: 33443
      tcp:
        8080: 33808
  defaultBackend:
    enabled: true
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                    - ingress-nginx-testing
            topologyKey: "kubernetes.io/hostname"
```
Trying to install different Helm chart versions of ingress-nginx-controller (4.5.1, 4.5.2, 4.6.0 and 4.6.1) does not help. Saw in the pod of ingress-nginx:
```
/etc/nginx $ curl -v 127.0.0.1/healthz
> GET /healthz HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/8.0.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Date: Wed, 10 May 2023 23:44:01 GMT
< Content-Type: text/html
< Content-Length: 146
< Connection: keep-alive
<
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
```
But when I add `host: somedns.example.com` to ingress.yaml, or remove the annotation `nginx.ingress.kubernetes.io/rewrite-target`, everything starts working well. So maybe someone can assist me with how to overcome this without adding a host.
/remove-kind bug
If the image used in the deployment is nginx, and that image has nothing like a path in the webroot for /healthz, I am not sure it points to a problem in the controller.
I think some more clear and specific test data and comments are needed to point at a problem in the controller. Once we know the problem clearly, there is a possibility to fix it.
So, @longwuyuan, I found out what causes it. Just a typo from my side that led to the default backend not being created.
I can confirm that just enabling `defaultBackend.enabled: true` helps with this issue.
So you can close this issue, since I guess this thread provides the needed details on how to overcome it.
p.s. the issue was that I put the `defaultBackend` section inside the `controller` section, but it needs to be like:

```yaml
controller:
  <needed section for controller>
defaultBackend:
  enabled: true
```
> So, you can close this issue, since I guess this thread provides the needed details how to overcome this issue.

Wait, what? Don't close it, as we want the initial bug to be solved. I have added the `defaultBackend` myself to get the `/healthz` endpoint to respond, but I do not want those additional pods running.
If I have an ingress that looks something like this
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /foo(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: foo-service
                port:
                  number: 80
          - path: /bar(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: bar-service
                port:
                  number: 80
```
Then the generated `/etc/nginx/nginx.conf` file in the ingress-nginx controller will look something like this:
```nginx
# Global filters
## start server _
server {
    server_name _ ;

    listen 80 default_server reuseport backlog=4096 ;
    listen [::]:80 default_server reuseport backlog=4096 ;
    listen 443 default_server reuseport backlog=4096 ssl http2 ;
    listen [::]:443 default_server reuseport backlog=4096 ssl http2 ;

    set $proxy_upstream_name "-";

    ssl_reject_handshake off;

    ssl_certificate_by_lua_block {
        certificate.call()
    }

    location ~* "^/foo(/|$)(.*)" {
        # removed stuff
    }

    location ~* "^/bar(/|$)(.*)" {
        # removed stuff
    }

    location ~* "^/" {
        # removed stuff, but pretty much proxy to defaultBackend
    }

    location /healthz {
        access_log off;
        return 200;
    }
}
```
Here I will never hit the `/healthz` location, only `~* "^/"` (unless the path starts with /foo or /bar, of course). And if I don't have a defaultBackend, then the `~* "^/"` location will time out.
But if I don't have that ingress with rewrite at all, then the `/etc/nginx/nginx.conf` file in the ingress-nginx controller will look something like this:
```nginx
# Global filters
## start server _
server {
    server_name _ ;

    listen 80 default_server reuseport backlog=4096 ;
    listen [::]:80 default_server reuseport backlog=4096 ;
    listen 443 default_server reuseport backlog=4096 ssl http2 ;
    listen [::]:443 default_server reuseport backlog=4096 ssl http2 ;

    set $proxy_upstream_name "-";

    ssl_reject_handshake off;

    ssl_certificate_by_lua_block {
        certificate.call()
    }

    location "/" {
        # removed stuff, but pretty much proxy to defaultBackend
    }

    location /healthz {
        access_log off;
        return 200;
    }
}
```
where I am able to hit the `/healthz` location. As such, I don't need the defaultBackend, which means one (possibly more) fewer pods.
This is exactly what @harry1064 said 10 months ago.
> Wait, what? Don't close it, as we want the initial bug to be solved.

Oh yeah, @ubbeK, sorry, my fault. I did not want to post that while typing the answer; I wanted just to close a dialogue between me and @longwuyuan, since for now this workaround is fine.
We just experienced the same issue after upgrading our ingress-nginx controller to 1.8.0 (via Helm chart 4.7.0). One of our ingress controllers had rewrites configured without a host, and after upgrading it effectively broke all traffic to that ingress controller, as the health probe started to respond with 404, which in effect made our cloud load balancer think all our nodes are unhealthy.
This must have been working correctly previously, as we ran this setup for a long time on an older version of the ingress-nginx controller.
Why not use port 10254 for healthz?

```yaml
httpGet:
  path: /healthz
  port: 10254
  scheme: HTTP
```

I use Helm chart 4.8.0 with TargetGroupBinding, and in the target group I use the same healthcheck (:10254/healthz) and it works fine. (Sorry if I misunderstood the original problem.)
Root cause: in my case the problem was a different configuration of AppGW. The AppGW connected to the older cluster had health probes configured via IP address, while the AppGW connected to the newer cluster was configured to "Pick host name from backend settings".
We are also experiencing a similar issue (404 from `/healthz` that is defined in `server _`) after moving our workloads to another cluster that has ingress-nginx controller v1.8.1 (deployed via Helm chart 4.7.1).
The same service/Ingress is working fine on a cluster that has controller v1.5.1.
Note: we do have "host" defined.
The Ingress configuration of the service/workloads is using rewrites:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: prj.cloud.svc
    meta.helm.sh/release-namespace: t3-prj-dev
    nginx.ingress.kubernetes.io/proxy-cookie-domain: /svc/ /
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/x-forwarded-prefix: /svc
  creationTimestamp: "2023-11-07T06:23:46Z"
  generation: 8
  labels:
    app: prj-cloud-svc
    app.kubernetes.io/managed-by: Helm
    chart: prj-cloud-svc-1.0.0-990eece0
    heritage: Helm
    k8slens-edit-resource-version: v1
    release: prj.cloud.svc
  name: prj-cloud-svc
  namespace: t3-prj-dev
  resourceVersion: "67892967"
  uid: b9f27c71-560f-441b-a547-962db008b029
spec:
  ingressClassName: dev-public-shared-nginx-v2
  rules:
    - host: prj.dev.domain.com
      http:
        paths:
          - backend:
              service:
                name: prj-cloud-svc
                port:
                  number: 8080
            path: /svc(/|$)(.*)
            pathType: ImplementationSpecific
```
When we apply the above config, `/healthz` returns 404 to AppGW.
If we delete this ingress configuration, `/healthz` returns 200 to AppGW.
It looks like with this config applied, the `location /healthz` section inside the `_` server stops working.
As a workaround, we are able to make it work by manually adding the following snippet to our ingress yaml config:

```yaml
nginx.ingress.kubernetes.io/server-snippet: |
  location ~* "^/healthz" {
    return 200;
  }
```

But then inside `nginx.conf` it's not part of the `_` server definition, but rather gets added to our "application" server definition that is reserved for pods running in one particular namespace (`t3-prj-dev`).
Hi,
Does anyone copied on this issue still have thoughts or data on it that is updated to the current release?
After reading part of the posts, I am a little confused between the original description of the issue reported here and much of the later messages and posts.
To re-start engaging: I am posting now that I get a 400 response to a /healthz path if I send a request to an existing hostname on an ingress. This is because the backend specified for that ingress does not have anything at /healthz.
And if I send a request to the external ipaddress of the LoadBalancer service with path /healthz, then I get a 200 response but no html:
```
% curl 192.168.49.2/healthz -v
*   Trying 192.168.49.2:80...
* Connected to 192.168.49.2 (192.168.49.2) port 80 (#0)
> GET /healthz HTTP/1.1
> Host: 192.168.49.2
> User-Agent: curl/7.88.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 22 Aug 2024 10:42:30 GMT
< Content-Type: text/html
< Content-Length: 0
< Connection: keep-alive
<
* Connection #0 to host 192.168.49.2 left intact

[~]
% k describe ing test0
Name:             test0
Labels:           <none>
Namespace:        default
Address:          192.168.49.2
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /     test0:80 (10.244.0.27:80)
Annotations:  <none>
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    57s (x3 over 27h)  nginx-ingress-controller  Scheduled for sync
```
This is on a non-custom install of the controller, and the controller version is v1.11.2.
This data makes the original-post issue-description some kind of a non-starter, because if there is no hostname configured, then there is no clear thought process and no real-world use-case data on what destination hostname or destination ipaddress the request (with path /healthz) was sent to.
I will close this issue for now. @davem-git, you can test with the latest controller release and post data for a real-world use-case, and explain the details of what hostname or ipaddress was receiving the curl request with path /healthz.
I understand that you said you will create an ingress with no host field, but to take any action now or to analyze any bug, I think we can only discuss with real data.
/close
@longwuyuan: Closing this issue.
@davem-git you can re-open if you want after you have edited the issue description with data that can be analyzed. Please know that just having * as the value in the host field is not valid, so it is critical for you to explain what hostname or ipaddress you sent your request to, with the path set as /healthz.
What happened:
When creating an ingress without a host, and using `nginx.ingress.kubernetes.io/rewrite-target: /$2`, the /healthz endpoint no longer works; it gives a 404.

What you expected to happen:
I would expect the default /healthz endpoint to not be affected.
NGINX Ingress controller version (exec into the pod and run `nginx-ingress-controller --version`):

```sh
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.2.1
  Build:         08848d69e0c83992c89da18e70ea708752f21d7a
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.10
-------------------------------------------------------------------------------
```

**Kubernetes version** (use `kubectl version`):

```sh
❯ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:33:37Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"5a97ee6d15525f6e4a1c2646bf1dfd2ebd5220b5", GitTreeState:"clean", BuildDate:"2022-06-15T04:26:33Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
```

**Environment**:

- **Cloud provider or hardware configuration**: Azure (AKS)
- **OS** (e.g. from /etc/os-release):

```sh
NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.6 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```

- **Kernel** (e.g. `uname -a`):

```sh
Linux aks-pool1az0-16738398-vmss00001A 5.4.0-1083-azure #87~18.04.1-Ubuntu SMP Fri Jun 3 13:19:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
```

- **Install tools**: terraform
- **Basic cluster related info**:
  - `kubectl version`:

```sh
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"5a97ee6d15525f6e4a1c2646bf1dfd2ebd5220b5", GitTreeState:"clean", BuildDate:"2022-06-15T04:26:33Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
```

  - `kubectl get nodes -o wide`:

```sh
❯ kubectl get nodes -o wide
NAME                            STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION   CONTAINER-RUNTIME
aks-pool0-34788694-vmss000000   Ready    agent   21d   v1.23.5   10.101.64.4
```