Closed: AlesKrajnik closed this issue 2 months ago.
No, there is no way to configure the ingress-nginx-controller to do that.
@longwuyuan Thanks for the quick answer!
Given that the desired behaviour was introduced for HTTP/HTTPS traffic, could it also be requested as a feature for TCP/UDP connections?
I am asking because I don't know the implementation differences between HTTP and TCP connections. It might be a viable feature request, or it might not be possible due to low-level architectural or product decisions.
Currently the project is in a feature-freeze phase doing stabilization work, so no features will be worked on until the end of 2022, or even a few months after. You are already aware that the TCP/UDP support opens a port on the controller, and you already know that the reverse-proxy component is nginx. So I can't even imagine how this feature, or any layer-7 behaviour in general, could be altered/implemented there. But I am not a developer, so wait for other comments on that aspect. Thanks.
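For context, that TCP exposure is declared through the Helm chart's tcp values (which feed the controller's tcp-services ConfigMap); a minimal sketch, where the external port 9000 and the default/example-go backend are made-up examples:

```yaml
# ingress-nginx Helm chart values (sketch; port and backend are placeholders)
# Each entry maps an external port opened on the controller's Service to a
# "namespace/serviceName:servicePort" backend.
tcp:
  "9000": "default/example-go:8080"
```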
/kind feature
/remove-kind feature /triage unresolved
/kind feature
/help
This is an interesting feature we may look to support in the future, but right now we need someone to investigate this further.
@strongjz: This request has been marked as needing help from a contributor.
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
/priority backlog /triage accepted
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
/triage accepted (org members only)
/close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The project had a feature for forwarding TCP/UDP traffic but that feature is deprecated and will be removed in the next release of the controller.
The integration of Istio and this controller is not something that the project has resources for. Also, the project is moving to implement the Gateway API and is distancing itself, where possible, from features that are not part of the spec of the K8S Ingress API.
Hence there is no actionable item from this issue and it only adds to the tally of open issues. I will close this issue for now. The original creator of the issue can re-open it by posting data, more in line with the comments above, that demonstrates a bug or a problem needing to be solved in the controller. Thanks.
/close
@longwuyuan: Closing this issue.
Hi,
I am using ingress-nginx v1.3.0 (installed with Helm chart v4.2.3) as a load balancer on DigitalOcean's Kubernetes. In the Kubernetes cluster, I am running the istio v1.15.0 service mesh. The mesh runs with the peer authentication mTLS mode set to STRICT (traffic must be properly encrypted).
I was able to successfully connect nginx with the istio service mesh for HTTP/HTTPS traffic, but I didn't find a way to do the same for TCP traffic. If I am not mistaken, a feature to allow that is missing.
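For reference, the mesh-wide STRICT policy described above looks roughly like this (the resource name and root namespace are the usual Istio defaults and may differ in a given installation):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace, so the policy applies mesh-wide
spec:
  mtls:
    mode: STRICT             # plaintext traffic between workloads is rejected
```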
When nginx forwards HTTP traffic, it can send it either to the K8s pods (Pod IP/port) or to the K8s services (Cluster IP/port), using the annotations nginx.ingress.kubernetes.io/service-upstream: "true" and nginx.ingress.kubernetes.io/upstream-vhost: "..." on the Ingress object (as documented here).
Without these annotations, istio's Envoy sidecar on the nginx pod does not consider the traffic directed to the pod as going inside the mesh and won't encrypt it correctly, so the upstream Envoy sidecar won't accept the traffic due to the mTLS mode set to STRICT. With the annotations, Envoy encrypts the traffic correctly and the upstream Envoy receives it correctly.
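A minimal sketch of such an Ingress, assuming a backend Service named myapp in the default namespace listening on port 80 (the host and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: default
  annotations:
    # Send traffic to the Service's Cluster IP instead of the pod endpoints.
    nginx.ingress.kubernetes.io/service-upstream: "true"
    # Rewrite the Host header to the in-cluster service name so the Envoy
    # sidecar can match the request to the mesh service and apply mTLS.
    nginx.ingress.kubernetes.io/upstream-vhost: "myapp.default.svc.cluster.local"
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```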
I am trying to solve the same for TCP services. It seems that nginx sends the TCP traffic directly to the pods, the same as for HTTP/HTTPS traffic when the annotations are not set. However, TCP traffic forwarding is not defined in an Ingress object, so it is not possible to apply the annotations mentioned above. This results in the TCP traffic not being correctly encrypted by Envoy: the downstream Envoy will deliver it to the upstream Envoy of the pod, but the upstream Envoy will drop the traffic because the downstream Envoy did not encrypt it and the peer authentication configuration requires that.
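For comparison, TCP forwarding is configured in the ConfigMap referenced by the controller's --tcp-services-configmap flag rather than in an Ingress object, so there is nothing to attach the annotations to. A sketch of that ConfigMap, with a made-up port and backend:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port : "namespace/serviceName:servicePort"
  "9000": "default/example-go:8080"
```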
The solution would be to instruct nginx to send the TCP traffic to the service endpoint instead; Envoy would then encrypt the traffic and deliver it to the upstream correctly.
Is there a way to configure nginx to do that?
Thank you!