kubernetes / ingress-nginx

Ingress NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0

400 Bad Request - The plain HTTP request was sent to HTTPS port #6822

Closed esteban1983cl closed 2 years ago

esteban1983cl commented 3 years ago

NGINX Ingress controller version: v0.43.0

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"73dd5c840662bb066a146d0871216333181f4b64", GitTreeState:"clean", BuildDate:"2021-01-13T13:22:41Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Environment:

What happened:

After upgrading the Helm chart from 3.12.0 to 3.22.0 I get this error message.

What you expected to happen:

The controller works fine.

How to reproduce it: Install the Helm chart using this values file:

values.yaml ```yaml ## nginx configuration ## Ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/index.md ## ## Overrides for generated resource names # See templates/_helpers.tpl # nameOverride: # fullnameOverride: controller: name: controller image: # I pulled from docker hub to my private repo. repository: registry.gitlab.com/xxxxxxxxxxxxxxx/nginx tag: "v0.43.0" digest: sha256:2b29d459bb978bc773fcfc824a837bb52d6223aba342411d22d61522e58f811b pullPolicy: IfNotPresent # www-data -> uid 101 runAsUser: 101 allowPrivilegeEscalation: true # Configures the ports the nginx-controller listens on containerPort: http: 80 https: 443 # Will add custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/ config: "enable-real-ip": "true" "force-ssl-redirect": "true" "log-format-escape-json": "true" "log-format-upstream": "{ \"nginx.time\": \"$time_iso8601\", \"nginx.remote_addr\": \"$proxy_protocol_addr\", \"nginx.x-forward-for\": \"$proxy_add_x_forwarded_for\", \"nginx.request_id\": \"$req_id\", \"nginx.remote_user\": \"$remote_user\", \"nginx.bytes_sent\": $bytes_sent, \"nginx.request_time\": $request_time, \"nginx.status\": $status, \"nginx.vhost\": \"$host\", \"nginx.request_proto\": \"$server_protocol\", \"nginx.path\": \"$uri\", \"nginx.request_query\": \"$args\", \"nginx.request_length\": $request_length, \"nginx.duration\": $request_time, \"nginx.method\": \"$request_method\", \"nginx.http_referrer\": \"$http_referer\", \"nginx.http_user_agent\": \"$http_user_agent\", \"nginx.namespace\": \"$namespace\", \"nginx.ingress-name\": \"$ingress_name\", \"nginx.service-name\": \"$service_name\", \"nginx.service-port\": \"$service_port\", \"nginx.request_uri\": \"$request_uri\", \"nginx.scheme\": \"$scheme\", \"nginx.full_url\": \"$http_client_request_url\"}" "proxy-body-size": "50m" "proxy-connect-timeout": "1800" "proxy-read-timeout": "1800" "proxy-real-ip-cidr": "0.0.0.0/0" "proxy-send-timeout": "1800" "redirect-to-https": "true" "use-forwarded-headers": "true" "use-proxy-protocol": "false" "worker-rlimit-nofile": "102400" ## Annotations to be added to the controller config configuration configmap ## configAnnotations: {} # Will add custom headers before sending traffic to backends according to https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/customization/custom-headers proxySetHeaders: {} # Will add custom headers before sending response traffic to the client according to: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headers addHeaders: {} # Optionally customize the pod dnsConfig. dnsConfig: "options": - "name": "dots" "value": "1" # Optionally change this to ClusterFirstWithHostNet in case you have 'hostNetwork: true'. # By default, while using host network, name resolution uses the host's DNS. If you wish nginx-controller # to keep resolving names inside the k8s network, use ClusterFirstWithHostNet. dnsPolicy: ClusterFirst # Bare-metal considerations via the host network https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network # Ingress status was blank because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply reportNodeInternalIp: false # Required for use with CNI based kubernetes installations (such as ones set up by kubeadm), # since CNI and hostport don't mix yet. 
Can be deprecated once https://github.com/kubernetes/kubernetes/issues/23920 # is merged hostNetwork: false ## Use host ports 80 and 443 ## Disabled by default ## hostPort: enabled: false ports: http: 80 https: 443 ## Election ID to use for status update ## electionID: ingress-controller-leader-nginx-private ## Name of the ingress class to route through this controller ## ingressClass: nginx-http-private # labels to add to the pod container metadata podLabels: {} # key: value ## Security Context policies for controller pods ## podSecurityContext: {} ## See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for ## notes on enabling and using sysctls ### sysctls: {} # sysctls: # "net.core.somaxconn": "8192" ## Allows customization of the source of the IP address or FQDN to report ## in the ingress status field. By default, it reads the information provided ## by the service. If disable, the status field reports the IP address of the ## node or nodes where an ingress controller pod is running. publishService: enabled: true ## Allows overriding of the publish service to bind to ## Must be / ## pathOverride: "" ## Limit the scope of the controller ## scope: enabled: false namespace: "" # defaults to .Release.Namespace ## Allows customization of the configmap / nginx-configmap namespace ## configMapNamespace: "" # defaults to .Release.Namespace ## Allows customization of the tcp-services-configmap ## tcp: configMapNamespace: "" # defaults to .Release.Namespace ## Annotations to be added to the tcp config configmap annotations: {} ## Allows customization of the udp-services-configmap ## udp: configMapNamespace: "" # defaults to .Release.Namespace ## Annotations to be added to the udp config configmap annotations: {} # Maxmind license key to download GeoLite2 Databases # https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases maxmindLicenseKey: "" ## Additional command line arguments to pass to nginx-ingress-controller ## E.g. 
to specify the default SSL certificate you can use ## extraArgs: ## default-ssl-certificate: "/" extraArgs: {} ## Additional environment variables to set extraEnvs: [] # extraEnvs: # - name: FOO # valueFrom: # secretKeyRef: # key: FOO # name: secret-resource ## DaemonSet or Deployment ## kind: Deployment ## Annotations to be added to the controller Deployment or DaemonSet ## annotations: {} # keel.sh/pollSchedule: "@every 60m" ## Labels to be added to the controller Deployment or DaemonSet ## labels: "k8s-app": "nginx-private-http" # keel.sh/policy: patch # keel.sh/trigger: poll # The update strategy to apply to the Deployment or DaemonSet ## updateStrategy: {} # rollingUpdate: # maxUnavailable: 1 # type: RollingUpdate # minReadySeconds to avoid killing pods before we are ready ## minReadySeconds: 0 ## Node tolerations for server scheduling to nodes with taints ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ ## tolerations: [] # - key: "key" # operator: "Equal|Exists" # value: "value" # effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)" ## Affinity and anti-affinity ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity ## affinity: "podAntiAffinity": "preferredDuringSchedulingIgnoredDuringExecution": - "podAffinityTerm": "topologyKey": "kubernetes.io/hostname" "weight": 100 "requiredDuringSchedulingIgnoredDuringExecution": - "topologyKey": "topology.kubernetes.io/zone" # # An example of preferred pod anti-affinity, weight is in the range 1-100 # podAntiAffinity: # preferredDuringSchedulingIgnoredDuringExecution: # - weight: 100 # podAffinityTerm: # labelSelector: # matchExpressions: # - key: app.kubernetes.io/name # operator: In # values: # - ingress-nginx # - key: app.kubernetes.io/instance # operator: In # values: # - ingress-nginx # - key: app.kubernetes.io/component # operator: In # values: # - controller # topologyKey: kubernetes.io/hostname # # An example of required pod anti-affinity # podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: # - labelSelector: # matchExpressions: # - key: app.kubernetes.io/name # operator: In # values: # - ingress-nginx # - key: app.kubernetes.io/instance # operator: In # values: # - ingress-nginx # - key: app.kubernetes.io/component # operator: In # values: # - controller # topologyKey: "kubernetes.io/hostname" ## Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in. ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ ## topologySpreadConstraints: [] # - maxSkew: 1 # topologyKey: failure-domain.beta.kubernetes.io/zone # whenUnsatisfiable: DoNotSchedule # labelSelector: # matchLabels: # app.kubernetes.io/instance: ingress-nginx-internal ## terminationGracePeriodSeconds ## wait up to five minutes for the drain of connections ## terminationGracePeriodSeconds: 300 ## Node labels for controller pod assignment ## Ref: https://kubernetes.io/docs/user-guide/node-selection/ ## nodeSelector: "node.kubernetes.io/role": "system" ## Liveness and readiness probe values ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes ## livenessProbe: failureThreshold: 5 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 port: 10254 readinessProbe: failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 port: 10254 # Path of the health check endpoint. 
All requests received on the port defined by # the healthz-port parameter are forwarded internally to this path. healthCheckPath: "/healthz" ## Annotations to be added to controller pods ## podAnnotations: {} replicaCount: 2 minAvailable: 2 # Define requests resources to avoid probe issues due to CPU utilization in busy nodes # ref: https://github.com/kubernetes/ingress-nginx/issues/4735#issuecomment-551204903 # Ideally, there should be no limits. # https://engineering.indeedblog.com/blog/2019/12/cpu-throttling-regression-fix/ resources: "limits": "cpu": "500m" "memory": "1Gi" "requests": "cpu": "500m" "memory": "512Mi" # limits: # cpu: 100m # memory: 90Mi # requests: # cpu: 100m # memory: 90Mi # Mutually exclusive with keda autoscaling autoscaling: enabled: true minReplicas: 3 maxReplicas: 20 targetCPUUtilizationPercentage: 50 targetMemoryUtilizationPercentage: 50 autoscalingTemplate: [] # Custom or additional autoscaling metrics # ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics # - type: Pods # pods: # metric: # name: nginx_ingress_controller_nginx_process_requests_total # target: # type: AverageValue # averageValue: 10000m # Mutually exclusive with hpa autoscaling keda: apiVersion: "keda.sh/v1alpha1" # apiVersion changes with keda 1.x vs 2.x # 2.x = keda.sh/v1alpha1 # 1.x = keda.k8s.io/v1alpha1 enabled: false minReplicas: 1 maxReplicas: 11 pollingInterval: 30 cooldownPeriod: 300 restoreToOriginalReplicaCount: false scaledObject: annotations: {} # Custom annotations for ScaledObject resource # annotations: # key: value triggers: [] # - type: prometheus # metadata: # serverAddress: http://:9090 # metricName: http_requests_total # threshold: '100' # query: sum(rate(http_requests_total{deployment="my-deployment"}[2m])) behavior: {} # scaleDown: # stabilizationWindowSeconds: 300 # policies: # - type: Pods # value: 1 # periodSeconds: 180 # scaleUp: # stabilizationWindowSeconds: 300 # policies: # - type: Pods # value: 2 # periodSeconds: 60 ## Enable mimalloc as a drop-in replacement for malloc. 
## ref: https://github.com/microsoft/mimalloc ## enableMimalloc: true ## Override NGINX template customTemplate: configMapName: "" configMapKey: "" service: enabled: true annotations: "service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags": "Name=nginx-http-private" "service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "http" "service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout": "60" "service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled": "true" "service.beta.kubernetes.io/aws-load-balancer-internal": "0.0.0.0/0" "service.beta.kubernetes.io/aws-load-balancer-ssl-cert": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" "service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy": "ELBSecurityPolicy-TLS-1-2-2017-01" "service.beta.kubernetes.io/aws-load-balancer-ssl-ports": "https" "service.beta.kubernetes.io/aws-load-balancer-type": "elb" labels: {} # clusterIP: "" ## List of IP addresses at which the controller services are available ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips ## externalIPs: [] # loadBalancerIP: "" loadBalancerSourceRanges: [] enableHttp: true enableHttps: true ## Set external traffic policy to: "Local" to preserve source IP on ## providers supporting it ## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer # externalTrafficPolicy: "" # Must be either "None" or "ClientIP" if set. Kubernetes will default to "None". # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies # sessionAffinity: "" # specifies the health check node port (numeric port number) for the service. If healthCheckNodePort isn’t specified, # the service controller allocates a port from your cluster’s NodePort range. # Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip # healthCheckNodePort: 0 ports: http: 80 https: 443 targetPorts: http: http https: https type: LoadBalancer # type: NodePort # nodePorts: # http: 32080 # https: 32443 # tcp: # 8080: 32808 nodePorts: http: "" https: "" tcp: {} udp: {} ## Enables an additional internal load balancer (besides the external one). ## Annotations are mandatory for the load balancer to come up. Varies with the cloud service. internal: enabled: false annotations: {} # loadBalancerIP: "" ## Restrict access For LoadBalancer service. Defaults to 0.0.0.0/0. loadBalancerSourceRanges: [] ## Set external traffic policy to: "Local" to preserve source IP on ## providers supporting it ## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer # externalTrafficPolicy: "" extraContainers: [] ## Additional containers to be added to the controller pod. ## See https://github.com/lemonldap-ng-controller/lemonldap-ng-controller as example. # - name: my-sidecar # image: nginx:latest # - name: lemonldap-ng-controller # image: lemonldapng/lemonldap-ng-controller:0.2.0 # args: # - /lemonldap-ng-controller # - --alsologtostderr # - --configmap=$(POD_NAMESPACE)/lemonldap-ng-configuration # env: # - name: POD_NAME # valueFrom: # fieldRef: # fieldPath: metadata.name # - name: POD_NAMESPACE # valueFrom: # fieldRef: # fieldPath: metadata.namespace # volumeMounts: # - name: copy-portal-skins # mountPath: /srv/var/lib/lemonldap-ng/portal/skins extraVolumeMounts: [] ## Additional volumeMounts to the controller main container. 
# - name: copy-portal-skins # mountPath: /var/lib/lemonldap-ng/portal/skins extraVolumes: [] ## Additional volumes to the controller pod. # - name: copy-portal-skins # emptyDir: {} extraInitContainers: - "command": - "/bin/sh" - "-c" - | sysctl -w net.core.somaxconn=65535; sysctl -w net.ipv4.ip_local_port_range="1024 65535"; sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=1; ulimit -c unlimited; ulimit -n 1024000; "image": "registry.gitlab.com/cencosud-ds/utils/docker-images/busybox:latest" "name": "configure-kernel" "securityContext": "privileged": true ## Containers, which are run before the app containers are started. # - name: init-myservice # image: busybox # command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;'] admissionWebhooks: annotations: {} enabled: false failurePolicy: Fail # timeoutSeconds: 10 port: 8443 certificate: "/usr/local/certificates/cert" key: "/usr/local/certificates/key" namespaceSelector: {} objectSelector: {} service: annotations: {} # clusterIP: "" externalIPs: [] # loadBalancerIP: "" loadBalancerSourceRanges: [] servicePort: 443 type: ClusterIP patch: enabled: true image: repository: docker.io/jettech/kube-webhook-certgen tag: v1.5.1 pullPolicy: IfNotPresent ## Provide a priority class name to the webhook patching job ## priorityClassName: "" podAnnotations: {} nodeSelector: {} tolerations: [] runAsUser: 2000 metrics: port: 10254 # if this port is changed, change healthz-port: in extraArgs: accordingly enabled: true service: annotations: {} # prometheus.io/scrape: "true" # prometheus.io/port: "10254" # clusterIP: "" ## List of IP addresses at which the stats-exporter service is available ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips ## externalIPs: [] # loadBalancerIP: "" loadBalancerSourceRanges: [] servicePort: 9913 type: ClusterIP # externalTrafficPolicy: "" # nodePort: "" serviceMonitor: enabled: true additionalLabels: app: kube-prometheus-stack release: kube-prometheus-stack namespace: "monitoring" namespaceSelector: any: true # Default: scrape .Release.Namespace only # To scrape all, use the following: # namespaceSelector: # any: true scrapeInterval: 30s # honorLabels: true targetLabels: [] metricRelabelings: [] prometheusRule: enabled: true additionalLabels: app: kube-prometheus-stack release: kube-prometheus-stack namespace: "monitoring" rules: # # These are just examples rules, please adapt them to your needs - alert: ConfigFailedNginxPrivateHttp expr: count(nginx_ingress_controller_config_last_reload_successful == 0) > 0 for: 1s labels: severity: critical annotations: description: bad ingress config - nginx config test failed summary: uninstall the latest ingress changes to allow config reloads to resume - alert: CertificateExpiryNginxPrivateHttp expr: (avg(nginx_ingress_controller_ssl_expire_time_seconds) by (host) - time()) < 604800 for: 1s labels: severity: critical annotations: description: ssl certificate(s) will expire in less then a week summary: renew expiring certificates to avoid downtime - alert: TooMany500sNginxPrivateHttp expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"5.+"} ) / sum(nginx_ingress_controller_requests) ) > 5 for: 1m labels: severity: warning annotations: description: Too many 5XXs summary: More than 5% of all requests returned 5XX, this requires your attention - alert: TooMany400sNginxPrivateHttp expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"4.+"} ) / sum(nginx_ingress_controller_requests) ) > 5 for: 1m labels: severity: 
warning annotations: description: Too many 4XXs summary: More than 5% of all requests returned 4XX, this requires your attention ## Improve connection draining when ingress controller pod is deleted using a lifecycle hook: ## With this new hook, we increased the default terminationGracePeriodSeconds from 30 seconds ## to 300, allowing the draining of connections up to five minutes. ## If the active connections end before that, the pod will terminate gracefully at that time. ## To effectively take advantage of this feature, the Configmap feature ## worker-shutdown-timeout new value is 240s instead of 10s. ## lifecycle: preStop: exec: command: - /wait-shutdown priorityClassName: "" ## Rollback limit ## revisionHistoryLimit: 10 ## Default 404 backend ## defaultBackend: ## enabled: false name: defaultbackend image: repository: k8s.gcr.io/defaultbackend-amd64 tag: "1.5" pullPolicy: IfNotPresent # nobody user -> uid 65534 runAsUser: 65534 runAsNonRoot: true readOnlyRootFilesystem: true allowPrivilegeEscalation: false extraArgs: {} serviceAccount: create: true name: "" ## Additional environment variables to set for defaultBackend pods extraEnvs: [] port: 8080 ## Readiness and liveness probes for default backend ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ ## livenessProbe: failureThreshold: 3 initialDelaySeconds: 30 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 readinessProbe: failureThreshold: 6 initialDelaySeconds: 0 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 5 ## Node tolerations for server scheduling to nodes with taints ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ ## tolerations: [] # - key: "key" # operator: "Equal|Exists" # value: "value" # effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)" affinity: {} ## Security Context policies for controller pods ## See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for ## notes on enabling and using sysctls ## podSecurityContext: {} # labels to add to the pod container metadata podLabels: "k8s-app": "nginx-private-http" # key: value ## Node labels for default backend pod assignment ## Ref: https://kubernetes.io/docs/user-guide/node-selection/ ## nodeSelector: "node.kubernetes.io/role": "system" ## Annotations to be added to default backend pods ## podAnnotations: {} replicaCount: 1 minAvailable: 1 resources: {} # limits: # cpu: 10m # memory: 20Mi # requests: # cpu: 10m # memory: 20Mi autoscaling: enabled: false minReplicas: 1 maxReplicas: 2 targetCPUUtilizationPercentage: 50 targetMemoryUtilizationPercentage: 50 service: annotations: {} # clusterIP: "" ## List of IP addresses at which the default backend service is available ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips ## externalIPs: [] # loadBalancerIP: "" loadBalancerSourceRanges: [] servicePort: 80 type: ClusterIP priorityClassName: "default-node-critical" ## Enable RBAC as per https://github.com/kubernetes/ingress/tree/master/examples/rbac/nginx and https://github.com/kubernetes/ingress/issues/266 rbac: create: true scope: false # If true, create & use Pod Security Policy resources # https://kubernetes.io/docs/concepts/policy/pod-security-policy/ podSecurityPolicy: enabled: false serviceAccount: create: true name: "" ## Optional array of imagePullSecrets containing private registry credentials ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ imagePullSecrets: - "name": "docker-pull-secrets" # 
TCP service key:value pairs # Ref: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/tcp ## tcp: {} # 8080: "default/example-tcp-svc:9000" # UDP service key:value pairs # Ref: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/udp ## udp: {} # 53: "kube-system/kube-dns:53" # A base64ed Diffie-Hellman parameter # This can be generated with: openssl dhparam 4096 2> /dev/null | base64 # Ref: https://github.com/krmichel/ingress-nginx/blob/master/docs/examples/customization/ssl-dh-param dhParam: ```

Anything else we need to know:

/kind bug
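For context: the 400 "plain HTTP request was sent to HTTPS port" error typically means the load balancer terminates TLS (per the aws-load-balancer-ssl-cert / aws-load-balancer-ssl-ports annotations above) and forwards the decrypted traffic to the controller's HTTPS port, which expects TLS. With the default port mapping in the values above, the Service's 443 port targets the container's https port, so that is exactly what happens. A minimal sketch of the problematic default mapping (illustrative only):

```yaml
controller:
  service:
    ports:
      http: 80
      https: 443
    targetPorts:
      http: http    # LB :80  -> nginx :80, plain HTTP, fine
      https: https  # LB :443 -> nginx :443, but TLS was already stripped at the LB,
                    # so nginx's TLS listener receives plaintext and answers 400
```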

esteban1983cl commented 3 years ago

OK, I solved this issue myself by adding the following configuration to my Helm chart values.yaml file. I picked up the configuration from https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/aws/deploy-tls-termination.yaml

# Configures the ports the nginx-controller listens on
containerPort:
  http: 80
  https: 443
  tohttps: 2443
---
config:
  http-snippet: |
    server {
      listen 2443;
      return 308 https://$host$request_uri;
    }
---
service:
  targetPorts:
    # http: http
    http: tohttps
    https: http
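For anyone wondering how this works (see the question below): the load balancer terminates TLS, so both of its listeners forward plain HTTP to the controller pod. The LB's port 80 listener is mapped to the extra tohttps container port (2443), where the http-snippet server block answers with a 308 redirect to HTTPS; the LB's port 443 listener (already decrypted) is mapped to nginx's plain HTTP port, so no TLS listener ever sees plaintext. A sketch of the resulting Service port mapping (assumed shape, not the rendered manifest):

```yaml
# Sketch of the LoadBalancer Service produced by the values above (assumed shape)
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: tohttps   # LB :80  -> nginx :2443 -> 308 redirect to https://
    - name: https
      port: 443
      targetPort: http      # LB :443 (TLS already terminated) -> nginx :80 as plain HTTP
```
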
aSapien commented 3 years ago

@esteban1983cl thank you! This helped me so much! I've been failing at configuring NLB TLS offloading for almost 3 weeks now! All my attempts were to forward traffic from the NLB to port 80 using AWS annotations, all to no avail.

Can you explain this workaround?

Nuru commented 3 years ago

@esteban1983cl Please re-open this issue. I'm glad you found a workaround, but that should not be required, so this is still a bug.

Please also hide your Helm values in a details block like this:

<details><summary>values.yaml</summary>

`​``yaml
here be yaml
`​``
</details>

which will look like this:

values.yaml ```yaml here be yaml ```
aSapien commented 3 years ago

@Nuru this appears to be the idiomatic way of doing TLS offloading on NLB with ingress-nginx:

https://github.com/kubernetes/ingress-nginx/blob/a7fb791132a0dee285b47d0041f4f9acf1e7cff8/deploy/static/provider/aws/deploy-tls-termination.yaml#L291-L299
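For reference, the relevant section of that manifest sets the controller ConfigMap roughly as follows (paraphrased from the linked file, not a verbatim copy):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  http-snippet: |
    server {
      listen 2443;
      return 308 https://$host$request_uri;
    }
  proxy-real-ip-cidr: XXX.XXX.XXX/XX   # placeholder: replace with your VPC CIDR
  use-forwarded-headers: "true"
```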

stealthHat commented 3 years ago

@aSapien, can you share the values that you used to make the NLB TLS offloading work? I'm trying to accomplish the same task, but with no success 3 weeks in :(

aSapien commented 3 years ago

@stealthHat please see my config below.

NOTE: Make sure you're not trying to install the other, similar helm chart by mistake

controller:
  containerPort:
    http: 80
    https: 443
    tohttps: 2443
  config:
    http-snippet: |
      server {
        listen 2443;
        return 308 https://$host$request_uri;
      }
    proxy-real-ip-cidr: XXX.XXX.XXX/XX
    use-forwarded-headers: 'true'
  service:
    externalTrafficPolicy: Local # Only connect to nodes which run the `nginx-ingress` pods. Avoids extra hop.
    targetPorts:
      http: tohttps
      https: http
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ${ tls_cert_arn }
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    name: ${ nginx_service_name }
stealthHat commented 3 years ago

@aSapien I installed ingress-nginx with these values, except with the addition of the external-dns annotation and without aws-load-balancer-internal: "true", because I'm using EKS with private and public subnets and I want to be able to access the URLs through the internet.

But out of the 3 Ingresses I created, I'm only able to access one of them. Maybe it's DNS propagation time? I will wait until tomorrow to see.

stealthHat commented 3 years ago

It doesn't work; I'm not able to access any URL now. Am I missing something? The DNS records on Route 53 are fine and the NLB is internet-facing.

aSapien commented 3 years ago

@stealthHat I suggest performing some debugging and/or contacting AWS Support.

I would try the following (no particular order):

The above should help you rule out some potential causes. If you still don't find the root cause, AWS Support should be able to walk you through some extra steps that might be specific to your use-case or configuration.

Good luck!

stealthHat commented 3 years ago

Oh, never mind, it was a bad configuration in ACM; after creating a valid certificate it works fine. Thanks anyway for the help.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

iamNoah1 commented 3 years ago

Hi @esteban1983cl @aSapien @Nuru @stealthHat do you guys still consider this to be an issue?

esteban1983cl commented 3 years ago

Hi @iamNoah1, I think it is still an issue. The following code snippet doesn't allow managing insecure traffic, I mean, HTTP requests.

config:
  http-snippet: |
    server {
      listen 2443;
      return 308 https://$host$request_uri;
    }

All traffic, including insecure requests, will be redirected to HTTPS. This breaks the ability to keep serving plain HTTP traffic. We need a different kind of solution for AWS load balancers.

baudlord commented 3 years ago

Hi @esteban1983cl @aSapien @Nuru @stealthHat do you guys still consider this to be an issue?

Hey! This issue hit me recently, and I think @esteban1983cl's solution should be readily available in the Helm chart, perhaps configurable via a bool value. Either that, or it should be added to the chart's documentation.

I could check out your contributor's documentation and work out a PR if you guys want.

All traffic, including insecure requests, will be redirected to HTTPS. This breaks the ability to keep serving plain HTTP traffic. We need a different kind of solution for AWS load balancers.

For your specific use case, maybe force-ssl-redirect is a better solution.

You should undo your changes (setting controller.service.targetPorts.http back to http), and instead add the nginx.ingress.kubernetes.io/force-ssl-redirect: "true" annotation to the specific Ingress objects you need SSL redirection on. This way, those Ingress objects will get an HTTP 308 while the rest will forward to the unencrypted HTTP service.
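For illustration, a minimal Ingress carrying that annotation could look like the following (name, host, and backend are hypothetical; adjust ingressClassName to your controller's class):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                               # hypothetical name
  annotations:
    # Redirect HTTP to HTTPS only for this Ingress
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx                    # match your controller's ingress class
  rules:
    - host: app.example.com                  # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app                 # hypothetical backend Service
                port:
                  number: 80
```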

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

iamNoah1 commented 3 years ago

@baudlord we are happy for any contribution. Go for it :)

iamNoah1 commented 3 years ago

/remove-lifecycle rotten

iamNoah1 commented 2 years ago

@esteban1983cl @baudlord can you folks confirm that this is still an issue with newer supported versions of ingress-nginx? Otherwise, we are still happy about any contribution :)

iamNoah1 commented 2 years ago

/close

Closing due to inactivity. Feel free to open a new issue.

k8s-ci-robot commented 2 years ago

@iamNoah1: Closing this issue.

In response to [this](https://github.com/kubernetes/ingress-nginx/issues/6822#issuecomment-994666318):

> /close
>
> Closing due to inactivity. Feel free to open a new issue.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.