Closed esteban1983cl closed 2 years ago
OK, I solved this issue myself by adding the following configuration to my values.yaml Helm chart file. I picked up the configuration from https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/aws/deploy-tls-termination.yaml
```yaml
# Configures the ports the nginx-controller listens on
containerPort:
  http: 80
  https: 443
  tohttps: 2443

config:
  http-snippet: |
    server {
      listen 2443;
      return 308 https://$host$request_uri;
    }

service:
  targetPorts:
    # http: http
    http: tohttps
    https: http
```
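In short, this wires the Service so that plain HTTP coming from the NLB lands on the small redirect server listening on 2443 (and gets a 308 to https), while TLS is terminated at the NLB and the decrypted traffic is forwarded to the normal HTTP port. A rough sketch of the resulting Service mapping (illustrative only, not the chart's exact rendered manifest):

```yaml
# Illustrative sketch of the Service port mapping produced by the values above;
# not the exact manifest the Helm chart renders.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
spec:
  type: LoadBalancer
  ports:
    - name: http          # NLB listener 80 (plain HTTP)
      port: 80
      targetPort: tohttps # hits the http-snippet server -> 308 redirect to https
    - name: https         # NLB listener 443, TLS terminated at the NLB
      port: 443
      targetPort: http    # decrypted traffic goes to the plain HTTP port 80
```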
@esteban1983cl thank you! This helped me so much! I've been failing at configuring NLB TLS offloading for almost 3 weeks now! All my attempts were to forward to port 80 from the NLB using AWS annotations, all to no avail.
Can you explain this workaround?
@esteban1983cl Please re-open this issue. I'm glad you found a workaround, but that should not be required, so this is still a bug.
Please also hide your helm values in a `details` block like this
<details><summary>values.yaml</summary>
```yaml
here be yaml
```
</details>
which will look like this:
@Nuru this appears to be the idiomatic way of doing TLS offloading on NLB with ingress-nginx:
@aSapien, can you share the values that you used to make the NLB TLS offloading work? I'm trying to accomplish the same task, but with no success 3 weeks in :(
@stealthHat please see my config below.
NOTE: Make sure you're not trying to install the other, similar helm chart by mistake
```yaml
controller:
  containerPort:
    http: 80
    https: 443
    tohttps: 2443
  config:
    http-snippet: |
      server {
        listen 2443;
        return 308 https://$host$request_uri;
      }
    proxy-real-ip-cidr: XXX.XXX.XXX/XX
    use-forwarded-headers: 'true'
  service:
    externalTrafficPolicy: Local # Only connect to nodes which run the `nginx-ingress` pods. Avoids extra hop.
    targetPorts:
      http: tohttps
      https: http
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ${ tls_cert_arn }
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    name: ${ nginx_service_name }
```
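Note that `${ tls_cert_arn }` and `${ nginx_service_name }` are templating placeholders; substitute your own values. Purely as a made-up example of what real values look like:

```yaml
# Example substitutions only; the ACM ARN and service name below are invented placeholders.
service:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/00000000-0000-0000-0000-000000000000
  name: ingress-nginx-private
```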
@aSapien I installed ingress-nginx with these values, except with the addition of the external-dns annotation and without `aws-load-balancer-internal: "true"`, because I'm using EKS with private and public subnets and I want to be able to access the URL through the internet.
But out of the 3 Ingresses that I created, I'm only able to access one of them. Maybe DNS propagation time? I will wait until tomorrow to see.
It didn't work, I'm not able to access any URL now. Am I missing something? The DNS on Route 53 is fine and the NLB is internet-facing.
@stealthHat I suggest performing some debugging and/or contacting AWS Support.
I would try the following (no particular order):

- `netcat` to the NLB IPs on the public port.
- `curl` and perform a TLS handshake.
- Reach `nginx-ingress` from within the cluster (a sketch of a throwaway test pod follows below).
- Increase the `ingress-nginx` log level.
- `kubectl describe service` and verify that effective configurations match the expectations.

The above should help you rule out some potential causes. If you still don't find the root cause, AWS Support should be able to walk you through some extra steps that might be specific to your use-case or configuration.
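For the in-cluster check, something like this throwaway pod can help (the controller Service name and namespace below are assumptions based on the chart defaults, adjust them to your release):

```yaml
# Hypothetical debug pod: curl the controller Service from inside the cluster.
# "ingress-nginx-controller.ingress-nginx" is an assumed Service/namespace.
apiVersion: v1
kind: Pod
metadata:
  name: debug-curl
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:latest
      command: ["curl", "-v", "http://ingress-nginx-controller.ingress-nginx.svc.cluster.local"]
```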
Good luck!
Oh, never mind, it was a bad configuration in ACM. After creating a valid certificate it works fine. Thanks anyway for the help.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Hi @esteban1983cl @aSapien @Nuru @stealthHat do you guys still consider this being an issue?
Hi @iamNoah1, I think it's still an issue. The following code snippet doesn't allow handling insecure traffic, I mean, HTTP requests.
```yaml
config:
  http-snippet: |
    server {
      listen 2443;
      return 308 https://$host$request_uri;
    }
```
All the traffic, including insecure requests, will be redirected to the HTTPS protocol. This breaks the ability to keep serving insecure traffic. We need another kind of solution for AWS load balancers.
Hi @esteban1983cl @aSapien @Nuru @stealthHat do you guys still consider this being an issue?
Hey! This issue hit me recently, and I think @esteban1983cl's solution should be readily available on the Helm chart, configurable via a bool value maybe. Either that, or adding it to the chart's documentation.
I could check out your contributor's documentation and work out a PR if you guys want.
> All the traffic, including insecure requests, will be redirected to the HTTPS protocol. This breaks the ability to keep serving insecure traffic. We need another kind of solution for AWS load balancers.
For your specific use case, maybe `force-ssl-redirect` is a better solution.
You should undo your changes (setting `controller.service.targetPorts.http` back to `http`), and instead add the `nginx.ingress.kubernetes.io/force-ssl-redirect: "true"` annotation to the specific Ingress objects you need SSL redirection on. This way, those Ingress objects will get an HTTP 308 while the rest will forward to the unencrypted HTTP service.
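For illustration, a minimal Ingress carrying that annotation might look like the sketch below (the name, host, backend service and ingress class are placeholders; older clusters may need the `networking.k8s.io/v1beta1` API instead):

```yaml
# Minimal sketch of per-Ingress SSL redirection; all names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```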
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark the issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@baudlord we are happy for any contribution. Go for it :)
/remove-lifecycle rotten
@esteban1983cl @baudlord can you folks confirm that this is still an issue also with newer supported versions of ingress nginx? Otherwise we are still happy for any contribution :)
/close
Closing due to inactivity. Feel free to open a new issue.
@iamNoah1: Closing this issue.
NGINX Ingress controller version: v0.43.0
Kubernetes version (use `kubectl version`):
Environment (`uname -a`):
What happened:
After upgrading the Helm chart from 3.12.0 to 3.22.0 I get this error message.
What you expected to happen:
The controller works fine.
How to reproduce it: Install helm chart using this values file:
values.yaml
```yaml
## nginx configuration
## Ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/index.md
##
## Overrides for generated resource names
# See templates/_helpers.tpl
# nameOverride:
# fullnameOverride:

controller:
  name: controller
  image:
    # I pulled from docker hub to my private repo.
    repository: registry.gitlab.com/xxxxxxxxxxxxxxx/nginx
    tag: "v0.43.0"
    digest: sha256:2b29d459bb978bc773fcfc824a837bb52d6223aba342411d22d61522e58f811b
    pullPolicy: IfNotPresent
    # www-data -> uid 101
    runAsUser: 101
    allowPrivilegeEscalation: true

  # Configures the ports the nginx-controller listens on
  containerPort:
    http: 80
    https: 443

  # Will add custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
  config:
    "enable-real-ip": "true"
    "force-ssl-redirect": "true"
    "log-format-escape-json": "true"
    "log-format-upstream": "{ \"nginx.time\": \"$time_iso8601\", \"nginx.remote_addr\": \"$proxy_protocol_addr\", \"nginx.x-forward-for\": \"$proxy_add_x_forwarded_for\", \"nginx.request_id\": \"$req_id\", \"nginx.remote_user\": \"$remote_user\", \"nginx.bytes_sent\": $bytes_sent, \"nginx.request_time\": $request_time, \"nginx.status\": $status, \"nginx.vhost\": \"$host\", \"nginx.request_proto\": \"$server_protocol\", \"nginx.path\": \"$uri\", \"nginx.request_query\": \"$args\", \"nginx.request_length\": $request_length, \"nginx.duration\": $request_time, \"nginx.method\": \"$request_method\", \"nginx.http_referrer\": \"$http_referer\", \"nginx.http_user_agent\": \"$http_user_agent\", \"nginx.namespace\": \"$namespace\", \"nginx.ingress-name\": \"$ingress_name\", \"nginx.service-name\": \"$service_name\", \"nginx.service-port\": \"$service_port\", \"nginx.request_uri\": \"$request_uri\", \"nginx.scheme\": \"$scheme\", \"nginx.full_url\": \"$http_client_request_url\"}"
    "proxy-body-size": "50m"
    "proxy-connect-timeout": "1800"
    "proxy-read-timeout": "1800"
    "proxy-real-ip-cidr": "0.0.0.0/0"
    "proxy-send-timeout": "1800"
    "redirect-to-https": "true"
    "use-forwarded-headers": "true"
    "use-proxy-protocol": "false"
    "worker-rlimit-nofile": "102400"

  ## Annotations to be added to the controller config configuration configmap
  ##
  configAnnotations: {}

  # Will add custom headers before sending traffic to backends according to https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/customization/custom-headers
  proxySetHeaders: {}

  # Will add custom headers before sending response traffic to the client according to: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headers
  addHeaders: {}

  # Optionally customize the pod dnsConfig.
  dnsConfig:
    "options":
      - "name": "dots"
        "value": "1"

  # Optionally change this to ClusterFirstWithHostNet in case you have 'hostNetwork: true'.
  # By default, while using host network, name resolution uses the host's DNS. If you wish nginx-controller
  # to keep resolving names inside the k8s network, use ClusterFirstWithHostNet.
  dnsPolicy: ClusterFirst

  # Bare-metal considerations via the host network https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
  # Ingress status was blank because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply
  reportNodeInternalIp: false

  # Required for use with CNI based kubernetes installations (such as ones set up by kubeadm),
  # since CNI and hostport don't mix yet. Can be deprecated once https://github.com/kubernetes/kubernetes/issues/23920
  # is merged
  hostNetwork: false

  ## Use host ports 80 and 443
  ## Disabled by default
  ##
  hostPort:
    enabled: false
    ports:
      http: 80
      https: 443

  ## Election ID to use for status update
  ##
  electionID: ingress-controller-leader-nginx-private

  ## Name of the ingress class to route through this controller
  ##
  ingressClass: nginx-http-private

  # labels to add to the pod container metadata
  podLabels: {}
  #  key: value

  ## Security Context policies for controller pods
  ##
  podSecurityContext: {}

  ## See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
  ## notes on enabling and using sysctls
  ###
  sysctls: {}
  # sysctls:
  #   "net.core.somaxconn": "8192"

  ## Allows customization of the source of the IP address or FQDN to report
  ## in the ingress status field. By default, it reads the information provided
  ## by the service. If disable, the status field reports the IP address of the
  ## node or nodes where an ingress controller pod is running.
  publishService:
    enabled: true
    ## Allows overriding of the publish service to bind to
    ## Must be
```
Anything else we need to know:
/kind bug