Closed: JimtotheB closed this issue 4 years ago.
ssl-redirect: "false" # we use `special` port to control ssl redirection
server-snippet: |
  listen 8000;
  if ( $server_port = 80 ) {
    return 308 https://$host$request_uri;
  }
Did you set externalTrafficPolicy: "Local" or "Cluster" ?
And do I need to annotate every deployment ingress also with the following?

nginx.ingress.kubernetes.io/server-snippet: |
  listen 8000;
  if ( $server_port = 80 ) {
    return 308 https://$host$request_uri;
  }
@dardanos
@ssh2n Local or Cluster does not matter for ssl-redirection.
If you want all services to have ssl-redirection, just put this in server-snippet:
listen 8000;
if ( $server_port = 80 ) {
  return 308 https://$host$request_uri;
}
But if you prefer to select which services require ssl-redirection, then you only need

listen 8000;

and leave the 308 redirection to the nginx.ingress.kubernetes.io/server-snippet annotation. controller.config.server-snippet adds the config to every nginx server, while the nginx.ingress.kubernetes.io/server-snippet annotation adds it only to the annotated server.
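Put together, the cluster-wide variant can be sketched as a Helm values fragment along these lines (the `controller.config` key layout is assumed from the ingress-nginx chart versions discussed in this thread):

```yaml
# Sketch only -- key layout assumed from the ingress-nginx Helm chart.
controller:
  config:
    ssl-redirect: "false"    # let the snippet below own the redirect
    server-snippet: |
      listen 8000;
      if ( $server_port = 80 ) {
        return 308 https://$host$request_uri;
      }
```

For the per-service variant, keep only `listen 8000;` here and move the 308 block into each Ingress annotation.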
Hey!
I'm new to Kubernetes, so I'm not familiar with Helm.
Fortunately, after banging my head a bit, I was able to adapt the great solutions in here to a regular yaml config.
Sharing in case it saves someone else some time!
# Download the mandatory nginx config as per usual
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
# Add a container port to the deployment that comes with the nginx mandatory yaml file
# This port will be used for redirecting as @Kongz describes above
kubectl patch deployment -n ingress-nginx nginx-ingress-controller --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/ports/-","value":{"name": "https-to-http", "containerPort": 8000, "protocol": "TCP"}}]'
Then for my production overlay I use patchesStrategicMerge
to add this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # https://github.com/kubernetes/ingress-nginx/issues/2724#issuecomment-593769295
    nginx.ingress.kubernetes.io/server-snippet: |
      listen 8000;
      if ( $server_port = 80 ) {
        return 308 https://$host$request_uri;
      }
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  ports:
    # https://github.com/kubernetes/ingress-nginx/issues/2724#issuecomment-593769295
    # https://superuser.com/a/1519548
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https-to-http
I'm just patching because my prod environment needs the redirect but my development one does not; if that isn't a concern, you can drop the above straight into your service.
I got stuck on this for a very long time as a beginner - so if you're reading this and you're also stuck feel free to ping!
@KongZ Right, I am using an NLB. However, the NGINX controller is still a L7 reverse proxy that forwards its own X-Forwarded-*
headers. Here is a snippet from my NGINX:
set $pass_server_port $server_port;
set $best_http_host $http_host;
set $pass_port $pass_server_port;
...
proxy_set_header X-Forwarded-Port $pass_port;
And because we are serving HTTPS over port 8000, it is forwarding the port as 8000 instead of 443.
@ssh2n every deployment you do with helm should have the annotation.
Thanks @dardanos, that was a bit confusing, so I switched back to the classic L7 setup :)
@walkafwalka, I ran into the same issue as you with apps which depend on X-Forwarded-Port. The solution below sets proxy_port instead of the default server_port. In my case, Jenkins with Keycloak redirection reported port 8000. This solved it:
location-snippet: |
  set $pass_server_port $proxy_port;
server-snippet: |
  listen 8000;
  if ( $server_port = 80 ) {
    return 308 https://$host$request_uri;
  }
ssl-redirect: "false"
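If you manage the controller with the Helm chart instead of editing the ConfigMap directly, a sketch of the equivalent values would be (assuming the chart maps `controller.config` keys into the controller ConfigMap):

```yaml
# Sketch only -- assumes the chart copies these keys into the controller ConfigMap.
controller:
  config:
    location-snippet: |
      set $pass_server_port $proxy_port;   # report the client-facing port, not 8000
    server-snippet: |
      listen 8000;
      if ( $server_port = 80 ) {
        return 308 https://$host$request_uri;
      }
    ssl-redirect: "false"
```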
I came across this although I am NOT using service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*". I have an NLB listening on HTTPS & HTTP which forwards requests as HTTP to NGINX, which I have in turn configured to forward all traffic to port http (80). My ingress is configured with "nginx.ingress.kubernetes.io/force-ssl-redirect": "true" for SSL redirection, and I am getting stuck in a redirect loop.
The issue was closed without recommending what workaround to apply in which context. It also doesn't mention whether or how it will be addressed without a workaround.
For my specific case, I assume that because I am NOT using service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*", NGINX keeps thinking it should redirect. But even when I do configure service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*", it still gets stuck in a redirect loop.
The nginx.com website documents an annotation that I haven't seen mentioned elsewhere, namely nginx.org/redirect-to-https, and even with that, things didn't work for me.
Also, having service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" on my NLB doesn't seem to enable ProxyProtocol v2 on the listeners, but I haven't tested it with an ELB.
So in total I have two issues:
My configuration looks like this using Helm charts:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: whatever
  namespace: default
spec:
  chart:
    repository: https://kubernetes.github.io/ingress-nginx/
    name: ingress-nginx
    version: 2.11.1
  values:
    config:
      proxy-real-ip-cidr:
        - "10.2.0.0/20"
    controller:
      service:
        targetPorts:
          https: http
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<AWS_ARN>"
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
and the ingress object itself:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: whatever
  namespace: default
spec:
  rules:
    - host: <CUSTOM_HOST>
      http:
        paths:
          - path: /
            backend:
              serviceName: <WHATEVER>
              servicePort: 80
I'd kindly appreciate your advice.
@abjrcode It is in my answer. It is the complete solution. Just configure the ingress-nginx values file and your app ingress according to my comment.
https://github.com/kubernetes/ingress-nginx/issues/2724#issuecomment-593769295
Thank you @KongZ for your suggestion. I will provide some more guidance for people coming across this and more options as I had a chance to take a thorough look at the code.
There are two choices of load balancer, at least when it comes to AWS. I am assuming you want to terminate TLS at the load balancer level and that we're dealing strictly with HTTPS & HTTP. If you are interested in TCP or UDP, please check this insightful comment on this very issue.
ELB (although Classic, and to be completely deprecated at some point), probably for historical reasons, actually forwards the X-Forwarded-* headers.
The NGINX controller supports these headers and can do redirection based on them. Here's how your configuration would look with Helm:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: <RELEASE_NAME>
  namespace: <NAMESPACE>
spec:
  chart:
    repository: https://kubernetes.github.io/ingress-nginx/
    name: ingress-nginx
    version: 2.11.1
  values:
    config:
      ssl-redirect: "false" # We don't need this as NGINX isn't using any TLS certificates itself
      use-forwarded-headers: "true" # NGINX will now decide whether it will do redirection based on these headers
    controller:
      service:
        targetPorts:
          https: http # NGINX will never get HTTPS traffic, TLS is handled by load balancer
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<CERTIFICATE_ARN>"
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-type: "elb"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: <INGRESS_NAME>
  namespace: <NAMESPACE>
spec:
  rules:
    - host: <CUSTOM_HOST>
      http:
        paths:
          - path: /
            backend:
              serviceName: <WHATEVER>
              servicePort: <SOME_PORT>
There are two choices when it comes to NLBs. Unfortunately, at least from my point of view, the preferred option isn't available at the time of this writing because of this open issue:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: <RELEASE_NAME>
  namespace: <NAMESPACE>
spec:
  chart:
    repository: https://kubernetes.github.io/ingress-nginx/
    name: ingress-nginx
    version: 2.11.1
  values:
    config:
      ssl-redirect: "false" # We don't need this as NGINX isn't using any TLS certificates itself
      use-proxy-protocol: "true" # NGINX will read client connection info from the PROXY protocol header
    controller:
      service:
        targetPorts:
          https: http # NGINX will never get HTTPS traffic, TLS is handled by load balancer
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<CERTIFICATE_ARN>"
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
          service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: <INGRESS_NAME>
  namespace: <NAMESPACE>
spec:
  rules:
    - host: <CUSTOM_HOST>
      http:
        paths:
          - path: /
            backend:
              serviceName: <WHATEVER>
              servicePort: <SOME_PORT>
Please check @KongZ comment on this issue.
Thanks @KongZ, it works fine with NLB. Here are the changes I made, for anyone who does not use the Helm chart. I deployed ingress-nginx from https://github.com/kubernetes/ingress-nginx/blob/controller-v0.34.1/deploy/static/provider/aws/deploy.yaml

kubectl edit configmaps -n ingress-nginx ingress-nginx-controller

Add the following lines (note: the data section does not exist by default):
data:
  server-snippet: |
    listen 8000;
  ssl-redirect: "false"
Complete configmap as a reference:
apiVersion: v1
data:
  server-snippet: |
    listen 8000;
  ssl-redirect: "false"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":null,"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/version":"0.34.1","helm.sh/chart":"ingress-nginx-2.11.1"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"}}
  creationTimestamp: "2020-08-03T17:29:25Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.34.1
    helm.sh/chart: ingress-nginx-2.11.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
2. Edit ingress-nginx deployment
`kubectl edit deployments -n ingress-nginx ingress-nginx-controller`
Add the following lines in the `ports:` section.
More lines from the deployment, for context:

livenessProbe:
  failureThreshold: 5
  httpGet:
    path: /healthz
    port: 10254
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
name: controller
ports:
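The exact entry to add under `ports:` is not reproduced above; based on the patch command earlier in this thread, it would look roughly like this (the port name is arbitrary; `https-to-http` and `special` are both used in this thread):

```yaml
# Sketch of the extra container port for the redirect listener.
ports:
  - name: special
    containerPort: 8000
    protocol: TCP
```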
When you save and exit, the deployment will create a new ingress-nginx pod.
Finally, add the following annotation lines to your app ingress:
nginx.ingress.kubernetes.io/server-snippet: |
  if ( $server_port = 80 ) {
    return 308 https://$host$request_uri;
  }
Complete app ingress yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apple-ingress
  namespace: apple
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/server-snippet: |
      if ( $server_port = 80 ) {
        return 308 https://$host$request_uri;
      }
spec:
  rules:
    - host: apple.mydomain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: apple-service
              servicePort: 5678
Then update it: kubectl apply -f ingress-apple.yml
And let's test it
$ curl -I http://apple.mydomain.com
HTTP/1.1 308 Permanent Redirect
Server: nginx/1.19.1
Date: Tue, 04 Aug 2020 07:47:59 GMT
Content-Type: text/html
Content-Length: 171
Connection: keep-alive
Location: https://apple.mydomain.com/
$ curl -I https://apple.mydomain.com
HTTP/1.1 200 OK
Server: nginx/1.19.1
Date: Tue, 04 Aug 2020 07:48:20 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 15
Connection: keep-alive
X-App-Name: http-echo
X-App-Version: 0.2.3
@mjooz
@walkafwalka, ran into the same issue as you with apps which depend on X-Forwarded-Port. Solution below sets proxy_port instead of server_port which comes by default. In my case Jenkins with Keycloak redirection had port 8000. This solved it:
location-snippet: |
  set $pass_server_port $proxy_port;
server-snippet: |
  listen 8000;
  if ( $server_port = 80 ) {
    return 308 https://$host$request_uri;
  }
ssl-redirect: "false"
I ran into the same problem; it works for most services except Keycloak. I tried adding

location-snippet: |
  set $pass_server_port $proxy_port;

to the ingress-nginx configmap as you suggested, but it still does not work. Any advice?
@ssh2n Local or Cluster does not matter for ssl-redirection. If you want all services to have ssl-redirection, you just put this in server-snippet:

listen 8000;
if ( $server_port = 80 ) {
  return 308 https://$host$request_uri;
}

But if you prefer to select which services require ssl-redirection, then you only need

listen 8000;

and leave the 308 redirection to the nginx.ingress.kubernetes.io/server-snippet annotation. controller.config.server-snippet adds the config to every nginx server, while the nginx.ingress.kubernetes.io/server-snippet annotation adds it only to the annotated server.
controller.config.server-snippet is not working with the latest (0.45.0) Helm chart, as reported here: https://github.com/kubernetes/ingress-nginx/issues/6829. You need to include the snippet in every ingress.
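In that case, the per-Ingress workaround is the annotation form used throughout this thread; a minimal sketch of the metadata block:

```yaml
# Sketch: per-Ingress redirect annotation (assumes the controller-level
# "listen 8000;" snippet is already in place).
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      if ( $server_port = 80 ) {
        return 308 https://$host$request_uri;
      }
```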
@ngocketit got anything?
@samrakshak As I mentioned, putting the server snippet in Helm didn't work for me, so I had to put it in every ingress (with the nginx.ingress.kubernetes.io/server-snippet annotation), and that worked for me.
@ngocketit Now I am getting the following error:

error:1408F10B:SSL routines:ssl3_get_record:wrong version number

I am using an AWS NLB and the certificate is issued by cert-manager/Let's Encrypt. I want TLS termination, but I think TLS is not being terminated, which is why I am facing this issue.
Using helm ingress-nginx chart on EKS
Edit configmap ingress-nginx-controller
kubectl edit configmap ingress-nginx-controller -n ingress-nginx
Add
data:
  server-snippet: |
    listen 8000;
    if ( $server_port = 80 ) {
      return 308 https://$host$request_uri;
    }
  ssl-redirect: "false"
Edit service/ingress-nginx-controller by adding
meta.helm.sh/release-namespace: ingress-nginx
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <acm arn>
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
service.beta.kubernetes.io/aws-load-balancer-type: nlb
Set up the ports in the ingress controller to look like what I have below. NB: the special port is what you are going to add to the deployment's containerPort.

ports:
Now edit the ingress controller deployment's containerPort:
kubectl edit deployment.apps/ingress-nginx-controller -n ingress-nginx
Add:
@Ariseaz I applied the suggested workaround and it's working. Thank you @Ariseaz.
But today I installed the pgadmin service in my EKS cluster and it redirects to the special port 8000. Could you please suggest where I am making a mistake?
Below are the yamls.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
  namespace: tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      initContainers:
        - name: pgadmin-data-permission-fix
          image: busybox
          command: ["/bin/chown", "-R", "5050:5050", "/var/lib/pgadmin"]
          volumeMounts:
            - name: pgadminstorage
              mountPath: /var/lib/pgadmin
      containers:
        - name: pgadmin
          image: dpage/pgadmin4
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /var/lib/pgadmin
              name: pgadminstorage
          ports:
            - name: pgadmin
              containerPort: 5050
              protocol: TCP
          env:
            - name: PGADMIN_LISTEN_PORT
              value: "5050"
            - name: PGADMIN_DEFAULT_EMAIL
              value: admin
            - name: PGADMIN_DEFAULT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgadmin
                  key: pgadmin-password
      volumes:
        - name: pgadminstorage
          persistentVolumeClaim:
            claimName: pgadminstorage
service.yaml
---
apiVersion: v1
kind: Service
metadata:
  namespace: tools
  name: pgadmin
spec:
  type: ClusterIP
  ports:
    - name: pgadmin
      port: 5050
  selector:
    app: pgadmin
Ingress resource yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pgadmin
  namespace: tools
  annotations:
    external-dns.alpha.kubernetes.io/ttl: "60"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.org/location-snippets: |
      proxy_set_header HOST $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_http_version 1.1;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_pass http://pgadmin.abc.abc.com:5050;
      proxy_read_timeout 200s;
spec:
  ingressClassName: external-nginx
  rules:
    - host: pgadmin.abc.abc.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: pgadmin
                port:
                  number: 5050
I followed @Ariseaz's suggestion and was able to redirect to https, but it did not work when I enabled Proxy Protocol v2 on the NLB for forwarding the client's real IP.
I was able to fix both the https redirect and the client IP by following this page: https://kubernetes.github.io/ingress-nginx/deploy/#network-load-balancer-nlb https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml
Change this in the service:

spec:
  externalTrafficPolicy: Local
  ports:

and this in the deployment:

ports:

Hope this helps anyone looking for a similar solution.
@KongZ Could you please explain why we can't just use this?

if ( $server_port = 80 ) {
  return 308 https://$host$request_uri;
}

Why do we even need port 8000?
Also, why do we need this:

service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"

instead of this?

service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
@StepanKuksenko I haven't used nginx controllers for years now. I hope this description is still valid.
why we even need 8000 port ?
Because we need 80 to handle HTTP and respond with a 308, and 8000 is needed to handle HTTPS traffic. Port 443 cannot be used because it is used by nginx to terminate TLS; in this case we want to terminate TLS on the NLB instead.
┌─────┐ ┌─────────┐ ┌───────┐
┌──┴─┐ │ ┌───┴┐ │ ┌───┴─┐ │
───http──▶│:80 │───┼─http─▶│:80 │────────┼───▶│ :80 │ │
└──┬─┘ │ └───┬┘ │ └───┬─┘ │
│ │ │ │ ┌───┴─┐ │
│ NLB │ │ Service │ │:443 │Pod │
│ │ │ │ └───┬─┘ │
┌──┴─┐ │ ┌───┴┐ │ ┌───┴─┐ │
───https─▶│:443│───┼─http─▶│:443│────────┼───▶│:8000│ │
└──┬─┘ │ └───┬┘ │ └───┬─┘ │
└─────┘ └─────────┘ └───────┘
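The diagram maps onto a controller Service port list along these lines (a sketch; port names assumed, not taken from any specific manifest):

```yaml
# Sketch of the Service port mapping implied by the diagram above.
spec:
  ports:
    - name: http
      port: 80          # NLB :80 -> nginx :80, which answers with a 308
      targetPort: 80
    - name: https
      port: 443         # NLB :443 terminates TLS and forwards plain HTTP
      targetPort: 8000  # the extra "listen 8000" port in nginx
```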
also why we need this "tcp"
Because that is the spec; it accepts only "ssl" or "tcp".
What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):
Issues #2000 and #1957 touch on this, with #1957 suggesting it was fixed. Searched: 308, redirect, TCP, aws, elb, proxy, etc.
NGINX Ingress controller version: v0.16.2
Kubernetes version (use kubectl version): v1.9.6
Environment: AWS
What happened:
With this ingress that creates an ELB handling TLS termination, and these nginx settings asking for force-ssl-redirect, requesting http://example.com will result in a 308 redirect loop. With force-ssl-redirect: false it works fine, but there is no http -> https redirect.

What you expected to happen:
I expect http://example.com to be redirected to https://example.com by the ingress controller.
How to reproduce it (as minimally and precisely as possible):
Spin up an example with the settings above, a default backend, an ACM cert, and a dummy Ingress for it to attach to, then attempt to request the http:// endpoint.