Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I think PR https://github.com/nginxinc/kubernetes-ingress/pull/1512 will address this
Isn't that a different project though?
Had a look at their doc. Yes, the mentioned PR is on Nginx's ingress project. The same logic could probably be incorporated here too.
We could add a variable like SSLRejectHandshake in config.go, then add logic under the default server to set it:
{{ if eq $server.Hostname "_" }}
ssl_reject_handshake {{ if $cfg.SSLRejectHandshake }}on{{ else }}off{{ end }};
{{ end }}
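For illustration, with SSLRejectHandshake set to true, the catch-all server would render roughly like this (a sketch only; the listen and server_name lines are assumptions, not the actual generated config):

server {
    listen 443 ssl default_server;
    server_name _;
    # drops the TLS handshake instead of serving the fake certificate;
    # requires nginx >= 1.19.4, and no ssl_certificate is needed here
    ssl_reject_handshake on;
}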
BUT, the problem we will run into is that the /nginx_status and /healthz endpoints won't be accessible via HTTPS. Those are kept under the default server stanza, and access to them via HTTPS will fail when ssl_reject_handshake is "on".
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@ansilh would you consider opening a PR for it?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark the issue as fresh with /remove-lifecycle stale
- Mark the issue as rotten with /lifecycle rotten
- Close the issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
It would still be nice if this were implemented so that clusters running nginx-ingress weren't so easy to fingerprint.
/remove-lifecycle stale
Hi @iamNoah1, I've submitted a PR, but it looks like there are a few pending tasks to complete, including the CLA part. This is my first PR, so I might need some time to go through it.
I imagine the team will want it to not break /nginx_status and /healthz before they accept it, but fingers crossed they have an idea of how you can fix it...
@Jc2k, to serve those URLs with SSL handshake rejection enabled, we need a valid certificate for them (and also a way to provide those certs). If we serve the fake certificate for those, there is no point in enabling this, right?
So what I'd expect is for the setting to apply to 80 and 443, but health checks are on a different high port and that can still have the default cert (these are not exposed externally to the cluster, so it's fine).
E.g. looking at the manifest, the /healthz check is used by a livenessProbe and readinessProbe on port 10254. The probe seems to be set up for HTTP, not HTTPS, so hopefully enabling this still lets that URL work on port 10254? Otherwise the pod gets marked as unhealthy all the time.
I'm not sure there's a use case for a default vhost on 80/443 at all, to be clear. I'd be happy breaking those URLs on the public-facing ports, just not on 10254.
If the 10254 health checks work, it's fine I think.
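To illustrate the separation being suggested here, a hypothetical sketch of a dedicated plain-HTTP health listener (in the real deployment the 10254 endpoint appears to be served by the controller process rather than an nginx server block, but the isolation idea is the same):

# health endpoint on its own port, untouched by ssl_reject_handshake
# on the public 80/443 default server
server {
    listen 10254;
    location /healthz {
        access_log off;
        return 200;
    }
}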
Yes, the internal healthz is running on a nonstandard port, but the external one is explicitly documented to be exposed on port 80 already, so it's covered there.
The nginx status is allowed for localhost only, so it's not exposed to the outside. Looks to me like nginx_status is designed for a sidecar container.
# health checks in cloud providers require the use of port 80
location /healthz {
access_log off;
return 200;
}
# this is required to avoid error if nginx is being monitored
# with an external software (like sysdig)
location /nginx_status {
allow 127.0.0.1;
allow ::1;
deny all;
access_log off;
stub_status on;
}
Any other thoughts / suggestions?
Interesting, thanks for digging into this. From your research, nothing requires /healthz or /nginx_status on port 443. Thanks for updating the PR to point that out :) Hopefully the team will like this change.
/lifecycle active
/triage accepted /priority important-longterm
Thanks @Jc2k for the issue and @ansilh for the PR.
Let's keep the discussion about this implementation on the PR (keeping this issue open until the PR gets merged!).
Also, this makes me think that maybe we should write a blogpost/doc with sane and good security configurations for ingress. We don't want to break users/use-cases, but I'm seeing a lot of new improvements/attempts to harden ingress-nginx and I'm really thankful for that, so I think we should announce that and ask people to use those new parameters :)
We have a hardening guide (based on the CIS benchmark): https://kubernetes.github.io/ingress-nginx/deploy/hardening-guide/
There is one item in the guide that CIS could extend to include ssl_reject_handshake, but this needs to be done in the CIS benchmark for the Nginx core first:
2.4.2 Ensure requests for unknown hostnames are rejected (Not Scored)
Original request ref: https://github.com/kubernetes/ingress-nginx/pull/5881
Setting CIS aside, we can add a section in the hardening doc for new hardening-related configuration directives, like the one below:
| Parameter | Description | Default | Change default |
|---|---|---|---|
| ssl-reject-handshake | Reject SSL handshakes to unknown hosts to mitigate fingerprinting of Nginx ingress via the fake default certificate. | "false" | Refer doc, or Example |
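For concreteness, a sketch of what toggling that parameter would change in the generated catch-all server (the certificate path and exact directives are illustrative assumptions, not the merged implementation):

# ssl-reject-handshake "false" (default): the fake certificate is served
server {
    listen 443 ssl default_server;
    ssl_certificate     /etc/ingress-controller/ssl/default-fake-certificate.pem;  # assumed path
    ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;
}

# ssl-reject-handshake "true": the handshake is rejected, nothing to fingerprint
server {
    listen 443 ssl default_server;
    ssl_reject_handshake on;
}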
Right now, AIUI, connections directly to an IP will use the default cert and go to the default backend. By default this is relatively harmless (apart from /healthz it is just 404s), but I'd prefer not to return anything at all.
I want to be able to set the new ssl_reject_handshake setting on the default server stanza via the ingress config map. By setting this new flag, those connections (which are almost certainly bots and scanners) will be dropped before the 'Kubernetes Ingress Controller Fake Certificate' is even returned. (I'm not proposing this as a security improvement, as security through obscurity is not security, though obviously it would certainly be nice to not appear on this list.)
Cannot find any existing related issues.
Does not require any k8s features, but does require nginx ≥ 1.19.4.
/kind feature