The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@longwuyuan Is there any plan to fix this? It is still preventing us from switching to the chroot image.
@rittneje it would help tons if you pasted the kubectl describe output of the objects involved and the curl request intended to be sent, with the expected response.
I think there are 2 factors impacting this.
I may have to test, but your comments could also be interpreted as implying a broken backend-protocol: HTTPS annotation. Will check when I get time.
@longwuyuan I'm not sure what you mean. This issue has nothing to do with curl or the backend-protocol annotation. The problem is that the chroot image prevents nginx from reading the /etc/ssl/cert.pem path that we reference via the configuration-snippet annotation.
@rittneje, thanks for the clarification, it helps a lot.
To double-check, I was hoping to see the kubectl describe output of this attempt. Apologies for asking again, but are you referring to the field ingress.spec.rules.http.paths.backend.service, configured with the name of a Service of type ExternalName?
@longwuyuan I'm assuming you are looking for the Ingress spec? If so, it looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "my-ingress"
  namespace: "my-namespace"
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "false"
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "45"
    nginx.ingress.kubernetes.io/upstream-vhost: "[redacted]"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_name "[redacted]";
      proxy_ssl_server_name on;
      proxy_ssl_verify on;
      proxy_ssl_trusted_certificate /etc/ssl/cert.pem;
      proxy_ssl_verify_depth 5;
      proxy_ssl_protocols TLSv1.2 TLSv1.3;
spec:
  ingressClassName: nginx
  rules:
    - host: "[redacted]"
      http:
        paths:
          - path: "/(.*)"
            pathType: ImplementationSpecific
            backend:
              service:
                name: "my-service"
                port:
                  number: 443
  tls:
    - hosts:
        - "[redacted]"
---
kind: Service
apiVersion: v1
metadata:
  name: "my-service"
  namespace: "my-namespace"
spec:
  type: ExternalName
  externalName: "[redacted]"
Thanks. This must be an important part of your infra. Is it possible to get some more elaborate info on your use case? As in, why would you first send a request to a Kubernetes cluster, only to have that traffic be redirected to something else outside the cluster (assuming the externalName points at something that might as well be on the internet)? Your request could go directly to that externalName, so I'm wondering about the value added by bouncing off of an ingress.
It's so we can have those services on the same hostname. (And we don't want to rely on clients properly supporting redirects.)
Thanks, and it's working fine on the non-chrooted image but not on the chrooted image.
I think this will be impacted by the intended change, right after the stabilization work is complete.
/triage-accepted
Just to flesh out the issue with more small details: can you kindly comment on why you need to specify /etc/ssl/cert.pem if the externalName target is a public server with a cert from a standard trusted CA like digicert/letsencrypt etc.?
/triage accepted
> Just to flesh out the issue with more small details: can you kindly comment on why you need to specify /etc/ssl/cert.pem if the externalName target is a public server with a cert from a standard trusted CA like digicert/letsencrypt etc.?
Validating the server certificate is a security best practice to confirm we are connected to the correct server and not a malicious one. And since the server's certificate is issued by a public CA, we want to use the standard root of trust rather than having to maintain our own.
Ok. So this has to be presented differently, I wonder. If a chrooted controller cannot validate a cert issued by a well-known CA in the case of the backend being an externalName, then I wonder if a chrooted controller can validate any well-known cert at all. If true, then even using the annotation backend-protocol: HTTPS (with a backend pod presenting a Let's Encrypt certificate) should fail, because the chrooted controller cannot validate the Let's Encrypt cert in the backend pod.
Did I confuse the whole thing, or is my assumption above correct? Asking because if this is the case, then it needs to be presented as a caveat to using the chrooted controller.
By default nginx does not validate the server certificate. (See #7083.) Hence it would not fail, but you would be vulnerable to some exploits. And if you are using a custom root of trust via the proxy-ssl-secret annotation, then (I assume?) it will work.
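For what it's worth, the annotation route would look roughly like the sketch below. This is a minimal, untested sketch: the Secret name my-upstream-ca is hypothetical, and whether this works at all for our non-mTLS case is the subject of #6728, which is why we use the snippet instead.

apiVersion: v1
kind: Secret
metadata:
  name: my-upstream-ca              # hypothetical Secret holding the trusted CA bundle
  namespace: my-namespace
data:
  ca.crt: "[redacted]"              # base64-encoded PEM bundle of the CA(s) to trust
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # These annotations would stand in for the proxy_ssl_* lines in the configuration-snippet:
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "my-namespace/my-upstream-ca"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "5"
    nginx.ingress.kubernetes.io/proxy-ssl-protocols: "TLSv1.2 TLSv1.3"
spec:
  ingressClassName: nginx
  rules:
    - host: "[redacted]"
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 443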
One workaround is to mount a config map with the root of trust at some location, but I don't know enough about the chroot image to say if that would actually work, or where it would have to be mounted to be accessible/visible to nginx.
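A very rough sketch of that workaround, with the caveat that every name here is hypothetical and the mountPath rests on the assumption that files placed under /chroot in the container are what the chroot'ed nginx actually sees:

apiVersion: v1
kind: ConfigMap
metadata:
  name: upstream-trust-bundle        # hypothetical name
  namespace: ingress-nginx
data:
  cert.pem: |
    -----BEGIN CERTIFICATE-----
    [redacted]
    -----END CERTIFICATE-----
---
# Fragment to merge into the ingress-nginx controller Deployment, e.g. via kubectl patch
# (container name and mountPath are assumptions):
spec:
  template:
    spec:
      containers:
        - name: controller
          volumeMounts:
            - name: upstream-trust-bundle
              mountPath: /chroot/etc/ssl/custom   # assumed chroot-visible location
              readOnly: true
      volumes:
        - name: upstream-trust-bundle
          configMap:
            name: upstream-trust-bundle

If that assumption holds, the configuration-snippet would then point proxy_ssl_trusted_certificate at /etc/ssl/custom/cert.pem, i.e. the path as seen from inside the chroot.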
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- /triage accepted (org members only)
- /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
We recently had to decide that we will stop publishing the chrooted image, as a lot of the planned protection is getting implemented in the regular image.
Also, the lack of resources to work on the project has led to decisions like deprecating features that are hard to support and maintain and that are far from the K8S Ingress-API spec. The project also has to focus on implementing the Gateway-API. As such, there is no pending action item being tracked in this issue, so I am closing it. Thanks.
/close
@longwuyuan: Closing this issue.
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.): 1.2.0
Kubernetes version (use kubectl version): 1.21
Environment:
Cloud provider or hardware configuration: AWS EKS
OS (e.g. from /etc/os-release):
Kernel (e.g. uname -a):
Install tools:
Please mention how/where was the cluster created like kubeadm/kops/minikube/kind etc.
Basic cluster related info:
kubectl version
kubectl get nodes -o wide
How was the ingress-nginx-controller installed:
kubectl apply
What happened:
Because of #6728, we are not able to use the various proxy-ssl-* annotations in order to configure TLS for the connection to the upstreams. In particular, we want to use the standard operating system root of trust, and we are not using mTLS. Due to this bug, we are currently manually configuring TLS using a snippet, which includes:
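For reference, these are the same proxy_ssl directives shown in the configuration-snippet annotation earlier in this thread:

nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_ssl_name "[redacted]";
  proxy_ssl_server_name on;
  proxy_ssl_verify on;
  proxy_ssl_trusted_certificate /etc/ssl/cert.pem;
  proxy_ssl_verify_depth 5;
  proxy_ssl_protocols TLSv1.2 TLSv1.3;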
Since the chroot image does not allow nginx to access that path, we get an error.
What you expected to happen:
/etc/ssl/cert.pem needs to be accessible by the nginx pod in order to support proxies that don't use mTLS.
Until this is fixed, chroot cannot be enabled by default.
How to reproduce it:
Configure an Ingress with an nginx.ingress.kubernetes.io/configuration-snippet annotation as described above.
Anything else we need to know:
@rikatz @longwuyuan