Closed JohnGalt1717 closed 3 years ago
We have a difficult PCI compliance issue: the scanner rejects the self-signed certificate served by the default backend.
You can replace the default backend with the --default-backend-service flag,
and also set a default SSL certificate with --default-ssl-certificate,
so you don't get that error:
https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/
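For illustration, those two controller flags could be wired up through the chart's `controller.extraArgs` map; the namespace, service, and secret names below are placeholders, not values from this thread:

```shell
# Sketch, assuming the stable/nginx-ingress chart's controller.extraArgs map;
# "ingress-basic/my-backend" and "ingress-basic/my-default-cert" are placeholders.
helm upgrade nginx-ingress stable/nginx-ingress \
  --namespace ingress-basic \
  --reuse-values \
  --set controller.extraArgs.default-backend-service="ingress-basic/my-backend" \
  --set controller.extraArgs.default-ssl-certificate="ingress-basic/my-default-cert"
```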
We've tried everything we can think of to ensure that a request made directly to the IP address never reaches the default backend, without success.
Options 1 and 2 are not valid because the configured server blocks are never reached when the client connects using only an IP address.
@aledbf Is there a way to update the deploy without having to disrupt production to set that?
And really I'd suggest that there should just be a way to completely disable the default backend service because it's a security vulnerability. Especially for https.
> Is there a way to update the deploy without having to disrupt production to set that?
No. Also, I suggest you test any change before rolling it out to production.
> And really I'd suggest that there should just be a way to completely disable the default backend service because it's a security vulnerability. Especially for https.
"default backend" in HTTPS is reached only when there is no host configured OR when the client doesn't support SNI (that is what you see using the IP address).
Any special handling must be done by the user.
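The SNI behaviour described above can be observed directly with openssl; the IP and hostname below are placeholders:

```shell
# Without -servername (no SNI) the controller presents its self-signed
# "Kubernetes Ingress Controller Fake Certificate":
openssl s_client -connect xx.yy.zz.aa:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer

# With SNI for a configured host, the real certificate for that host is served:
openssl s_client -connect xx.yy.zz.aa:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```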
@aledbf what does "Any special handling must be done by the user" mean?
This is a security vulnerability by default. The ingress controller shouldn't respond at all unless configured to. Anything else is a security issue. At the very least this should be able to be turned off, but it should be off by default.
As it stands right now, since there is essentially zero documentation on how to control this, no site using this controller can pass PCI because of it.
What happens if I force shut down the default backend pod? Will kubernetes freak out because /healthz won't respond?
How does one set up the default backend so that it doesn't behave with insecure defaults? I can't find any documentation anywhere on how to control this thing, which shouldn't be there in the first place.
> This is a by default security vulnerability.
How did you reach this conclusion?
> The ingress controller shouldn't respond at all by default.
Why? What does "no response at all" mean from a user's perspective? The default backend is there to return 404 in the two scenarios I mentioned in a previous comment.
> What happens if I force shut down the default backend pod?
The default backend is optional. If you don't set one NGINX just returns 404.
> How does one setup the default backend so that it doesn't behave with insecure defaults?
What insecure defaults?
@aledbf PCI compliance requires a valid SSL certificate attached to ALL responses, or no response at all. Since you cannot create a valid SSL certificate for a bare IP address, as per the above, this will always be self-signed, which under PCI is verboten. It is insecure because it responds to HTTPS requests with a cert that can never be valid and thus never validated. That's a security issue by default. It means that 100% of PCI network scans against deployments that don't put nginx behind another firewall to block this are going to fail, and rightfully so.
There's never a case where I want my Kubernetes cluster responding to something that I haven't explicitly configured. If I want "all", then I'll define it as *.xxx in the rules. I don't want it responding to anything without my explicit permission. That is, deny-by-default is the only safe security policy in a public production environment.
How do I not set one? I didn't set a default backend when I configured it originally, yet it insisted on creating this default anyway. I want to disable this entirely at creation so it can't respond unless I tell it to.
Responding on HTTP and HTTPS without configuration and returning anything (a 404, or a 200 on /healthz) is an insecure default by definition. At the very least it lets bots identify that there is something there to attack, instead of getting no response, which is what they should get.
If people want to opt into this (though I don't see why you would; just set up your own catch-all in your configuration that responds to everything, it's not hard), they should be able to, but it shouldn't ever be on by default in any configuration.
Please at least consider adding a flag to outright disable it.
> Since you cannot create a valid SSL certificate for the IP address as per the above, then this will always be self-signed which under PCI is verboten
You can use a custom template (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/) to just return status code 444 (https://httpstatuses.com/444) in the default server block.
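As a simplified sketch (the real template builds its listen directives and TLS setup dynamically), the change in the custom template's catch-all server block would look like:

```nginx
# Simplified sketch of the template's default ("catch-all") server block with
# 444 instead of 404. NGINX's nonstandard 444 closes the connection without
# sending any response. Note that for HTTPS the TLS handshake -- and therefore
# the certificate presentation -- still completes before the connection drops.
server {
    listen 80 default_server;
    access_log off;
    location / {
        return 444;
    }
}
```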
Again, per PCI it can't return anything. There MUST be no response. If it responds at all, 100% of network scan providers will fail you, saying the certificate is invalid, and reject you.
And if I use kubectl to delete the default backend pod, it just gets recreated, so I can't even prevent the damn thing from exposing what it shouldn't.
And I can't even write an ingress configuration to capture this state and redirect the request to a service. It just ignores it and responds with the default backend.
So no matter what I try, there's no way to prevent this insecure response, which means this can't be used in production if you require PCI compliance.
From the link https://httpstatuses.com/444
@aledbf So your solution is to force people to hack in volumes (for which the documentation is bad) to make it do what it should do by default?
How is this not a major security bug? There is never a case where you want direct IP access to return HTTPS in a production environment. Never. If you're doing so, you're doing it wrong and are going to get burned.
Is this the flag that I need to use when creating this again with helm to prevent this from ever being created so I don't have to hack volumes?
BTW, installing with the helm chart and disabling the default backend like so:
```shell
helm install nginx-ingress stable/nginx-ingress \
  --namespace ingress-basic \
  --set controller.replicaCount=2 \
  --set controller.nodeSelector."beta.kubernetes.io/os"=linux \
  --set defaultBackend.nodeSelector."beta.kubernetes.io/os"=linux \
  --set controller.service.loadBalancerIP="xx.yy.zz.aa" \
  --set defaultBackend.enabled="false" \
  --set configmap="controller-configmap"
```
Doesn't work either. It still responds on the IP address with a 404 and an insecure certificate, just a different 404 page than the one the default backend replies with.
So no matter what you do, this is still an issue that prevents this from ever being used in production and getting PCI certification.
(disabling the default backend doesn't solve this either)
Considering that PCI DSS certification is not the main use case for most of this product's user base, I'd say it is fine to expect people who want it to do some more hands-on work. Given the intent of PCI DSS certification, you will probably want to strip all kinds of stuff out of the default template anyway.
That being said, it appears you forgot to disable the default backend in the initial issue, which is entirely possible with this Helm chart. OTOH you could enable it, supply an image of your choice, and have that respond with 444.
It would be nice, though, to have an easier way via this chart to add configuration to the default server block of the configuration.
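The "supply an image of your choice" option could be sketched like this; `my-registry/return-444` is a placeholder, not a real image, and you would have to build something that answers every request with 444 yourself:

```shell
# Hypothetical sketch, assuming the stable/nginx-ingress chart's
# defaultBackend.image values: enable the default backend but point it at a
# custom image (placeholder name) that responds to everything with 444.
helm upgrade nginx-ingress stable/nginx-ingress \
  --namespace ingress-basic \
  --reuse-values \
  --set defaultBackend.enabled=true \
  --set defaultBackend.image.repository="my-registry/return-444" \
  --set defaultBackend.image.tag="latest"
```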
@Arabus It should never be fine to have a known security vulnerability.
If you disable the default backend, then you just get a different 404 error that is also not PCI compliant. So even that doesn't work.
It shouldn't respond at all.
I dispute that responding to unconfigured hostnames is a "known security vulnerability".
Responding with a self-signed SSL certificate is not a vulnerability in and of itself but standard behaviour of X.509. Your client issuing a warning about an untrusted CA is also standard behaviour. That's how certificates work.
imo PCI DSS is doing some ass covering here by requiring security by obscurity. Not responding to HTTP requests to your IP address is annoying at best and will not prevent detection or fingerprinting of your service. You most likely have at least one reverse DNS entry, which will yield at least one known valid hostname.
Just because $somesecurityaudit requires you to prevent something doesn't mean it is a problem. Most of the time it's compliance, i.e. legal ass covering.
I'd be inclined to change my mind if someone were to supply compelling documentation on the issue though.
> I dispute that responding to unconfigured hostnames is a "known security vulnerability".
> Responding with a self-signed SSL certificate is not a vulnerability in and of itself but standard behaviour of X.509. Your client issuing a warning about an untrusted CA is also standard behaviour. That's how certificates work.
> imo PCI DSS is doing some ass covering here by requiring security by obscurity. Not responding to HTTP requests to your IP address is annoying at best and will not prevent detection or fingerprinting of your service. You most likely have at least one reverse DNS entry, which will yield at least one known valid hostname.
> Just because $somesecurityaudit requires you to prevent something doesn't mean it is a problem. Most of the time it's compliance, i.e. legal ass covering.
> I'd be inclined to change my mind if someone were to supply compelling documentation on the issue though.
I would be inclined to agree with you on the above points, except that ingress-nginx already includes exactly this kind of audit ass covering: the ModSecurity rules are supported out of the box via the ConfigMap, among other things. I also see no functional reason to return anything when navigating to the ingress controller directly by IP, because this can be used to identify attack vectors independent of any audit. E.g. if some version of this ingress controller is vulnerable at some point, people can crawl IPs looking for vulnerable ingress-nginx instances.
IMO some sort of ConfigMap value, e.g. disable_direct_ip_access = true, could disable all requests to the ingress controller's assigned IP.
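To make that proposal concrete, such a knob might look like the sketch below. To be clear, this key is purely hypothetical; it does not exist in ingress-nginx:

```yaml
# HYPOTHETICAL -- "disable-direct-ip-access" is the key proposed above and is
# NOT an existing ingress-nginx ConfigMap option.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-basic
data:
  disable-direct-ip-access: "true"
```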
And I might add, the argument about whether or not it's a security issue is immaterial. A huge portion of the use of this ingress is for sites that must be PCI DSS compliant.
Hence it doesn’t make a hill of beans difference if it is or isn’t. I tested 5 certification systems that are certified by pci to do so. 100% of them fail you if they get a self-signed cert back.
Hence the only reasonable solution is to not respond to the IP at all, or to provide a trivial way to hook up a valid, publicly recognized cert. The current hack I came up with isn't a viable long-term solution, as it takes way too many steps and requires deep plumbing.
I tried the suggested return 444 and created an nginx.tmpl:
{{ $all := . }}
{{ $servers := .Servers }}
{{ $cfg := .Cfg }}
{{ $IsIPV6Enabled := .IsIPV6Enabled }}
{{ $healthzURI := .HealthzURI }}
{{ $backends := .Backends }}
{{ $proxyHeaders := .ProxySetHeaders }}
{{ $addHeaders := .AddHeaders }}
# Configuration checksum: {{ $all.Cfg.Checksum }}
# setup custom paths that do not require root access
pid {{ .PID }};
{{ if $cfg.UseGeoIP2 }}
load_module /etc/nginx/modules/ngx_http_geoip2_module.so;
{{ end }}
{{ if (shouldLoadModSecurityModule $cfg $servers) }}
load_module /etc/nginx/modules/ngx_http_modsecurity_module.so;
{{ end }}
{{ if $cfg.EnableOpentracing }}
load_module /etc/nginx/modules/ngx_http_opentracing_module.so;
{{ end }}
daemon off;
worker_processes {{ $cfg.WorkerProcesses }};
{{ if gt (len $cfg.WorkerCPUAffinity) 0 }}
worker_cpu_affinity {{ $cfg.WorkerCPUAffinity }};
{{ end }}
worker_rlimit_nofile {{ $cfg.MaxWorkerOpenFiles }};
{{/* http://nginx.org/en/docs/ngx_core_module.html#worker_shutdown_timeout */}}
{{/* avoid waiting too long during a reload */}}
worker_shutdown_timeout {{ $cfg.WorkerShutdownTimeout }} ;
{{ if not (empty $cfg.MainSnippet) }}
{{ $cfg.MainSnippet }}
{{ end }}
events {
multi_accept {{ if $cfg.EnableMultiAccept }}on{{ else }}off{{ end }};
worker_connections {{ $cfg.MaxWorkerConnections }};
use epoll;
}
http {
lua_package_path "/etc/nginx/lua/?.lua;;";
{{ buildLuaSharedDictionaries $cfg $servers }}
init_by_lua_block {
collectgarbage("collect")
-- init modules
local ok, res
ok, res = pcall(require, "lua_ingress")
if not ok then
error("require failed: " .. tostring(res))
else
lua_ingress = res
lua_ingress.set_config({{ configForLua $all }})
end
ok, res = pcall(require, "configuration")
if not ok then
error("require failed: " .. tostring(res))
else
configuration = res
end
ok, res = pcall(require, "balancer")
if not ok then
error("require failed: " .. tostring(res))
else
balancer = res
end
{{ if $all.EnableMetrics }}
ok, res = pcall(require, "monitor")
if not ok then
error("require failed: " .. tostring(res))
else
monitor = res
end
{{ end }}
ok, res = pcall(require, "certificate")
if not ok then
error("require failed: " .. tostring(res))
else
certificate = res
end
ok, res = pcall(require, "plugins")
if not ok then
error("require failed: " .. tostring(res))
else
plugins = res
end
-- load all plugins that'll be used here
plugins.init({})
}
init_worker_by_lua_block {
lua_ingress.init_worker()
balancer.init_worker()
{{ if $all.EnableMetrics }}
monitor.init_worker()
{{ end }}
plugins.run()
}
{{/* Enable the real_ip module only if we use either X-Forwarded headers or Proxy Protocol. */}}
{{/* we use the value of the real IP for the geo_ip module */}}
{{ if or $cfg.UseForwardedHeaders $cfg.UseProxyProtocol }}
{{ if $cfg.UseProxyProtocol }}
real_ip_header proxy_protocol;
{{ else }}
real_ip_header {{ $cfg.ForwardedForHeader }};
{{ end }}
real_ip_recursive on;
{{ range $trusted_ip := $cfg.ProxyRealIPCIDR }}
set_real_ip_from {{ $trusted_ip }};
{{ end }}
{{ end }}
{{ if $all.Cfg.EnableModsecurity }}
modsecurity on;
modsecurity_rules_file /etc/nginx/modsecurity/modsecurity.conf;
{{ if $all.Cfg.EnableOWASPCoreRules }}
modsecurity_rules_file /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf;
{{ else if (not (empty $all.Cfg.ModsecuritySnippet)) }}
modsecurity_rules '
{{ $all.Cfg.ModsecuritySnippet }}
';
{{ end }}
{{ end }}
{{ if $cfg.UseGeoIP }}
{{/* databases used to determine the country depending on the client IP address */}}
{{/* http://nginx.org/en/docs/http/ngx_http_geoip_module.html */}}
{{/* this is required to calculate traffic for individual countries using GeoIP in the status page */}}
geoip_country /etc/nginx/geoip/GeoIP.dat;
geoip_city /etc/nginx/geoip/GeoLiteCity.dat;
geoip_org /etc/nginx/geoip/GeoIPASNum.dat;
geoip_proxy_recursive on;
{{ end }}
{{ if $cfg.UseGeoIP2 }}
# https://github.com/leev/ngx_http_geoip2_module#example-usage
geoip2 /etc/nginx/geoip/GeoLite2-City.mmdb {
$geoip2_city_country_code source=$remote_addr country iso_code;
$geoip2_city_country_name source=$remote_addr country names en;
$geoip2_city source=$remote_addr city names en;
$geoip2_postal_code source=$remote_addr postal code;
$geoip2_dma_code source=$remote_addr location metro_code;
$geoip2_latitude source=$remote_addr location latitude;
$geoip2_longitude source=$remote_addr location longitude;
$geoip2_time_zone source=$remote_addr location time_zone;
$geoip2_region_code source=$remote_addr subdivisions 0 iso_code;
$geoip2_region_name source=$remote_addr subdivisions 0 names en;
}
geoip2 /etc/nginx/geoip/GeoLite2-ASN.mmdb {
$geoip2_asn source=$remote_addr autonomous_system_number;
$geoip2_org source=$remote_addr autonomous_system_organization;
}
{{ end }}
aio threads;
aio_write on;
tcp_nopush on;
tcp_nodelay on;
log_subrequest on;
reset_timedout_connection on;
keepalive_timeout {{ $cfg.KeepAlive }}s;
keepalive_requests {{ $cfg.KeepAliveRequests }};
client_body_temp_path /tmp/client-body;
fastcgi_temp_path /tmp/fastcgi-temp;
proxy_temp_path /tmp/proxy-temp;
ajp_temp_path /tmp/ajp-temp;
client_header_buffer_size {{ $cfg.ClientHeaderBufferSize }};
client_header_timeout {{ $cfg.ClientHeaderTimeout }}s;
large_client_header_buffers {{ $cfg.LargeClientHeaderBuffers }};
client_body_buffer_size {{ $cfg.ClientBodyBufferSize }};
client_body_timeout {{ $cfg.ClientBodyTimeout }}s;
http2_max_field_size {{ $cfg.HTTP2MaxFieldSize }};
http2_max_header_size {{ $cfg.HTTP2MaxHeaderSize }};
http2_max_requests {{ $cfg.HTTP2MaxRequests }};
http2_max_concurrent_streams {{ $cfg.HTTP2MaxConcurrentStreams }};
types_hash_max_size 2048;
server_names_hash_max_size {{ $cfg.ServerNameHashMaxSize }};
server_names_hash_bucket_size {{ $cfg.ServerNameHashBucketSize }};
map_hash_bucket_size {{ $cfg.MapHashBucketSize }};
proxy_headers_hash_max_size {{ $cfg.ProxyHeadersHashMaxSize }};
proxy_headers_hash_bucket_size {{ $cfg.ProxyHeadersHashBucketSize }};
variables_hash_bucket_size {{ $cfg.VariablesHashBucketSize }};
variables_hash_max_size {{ $cfg.VariablesHashMaxSize }};
underscores_in_headers {{ if $cfg.EnableUnderscoresInHeaders }}on{{ else }}off{{ end }};
ignore_invalid_headers {{ if $cfg.IgnoreInvalidHeaders }}on{{ else }}off{{ end }};
limit_req_status {{ $cfg.LimitReqStatusCode }};
limit_conn_status {{ $cfg.LimitConnStatusCode }};
{{ if $cfg.EnableOpentracing }}
opentracing on;
{{ end }}
{{ buildOpentracing $cfg }}
include /etc/nginx/mime.types;
default_type text/html;
{{ if $cfg.EnableBrotli }}
brotli on;
brotli_comp_level {{ $cfg.BrotliLevel }};
brotli_types {{ $cfg.BrotliTypes }};
{{ end }}
{{ if $cfg.UseGzip }}
gzip on;
gzip_comp_level {{ $cfg.GzipLevel }};
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types {{ $cfg.GzipTypes }};
gzip_proxied any;
gzip_vary on;
{{ end }}
# Custom headers for response
{{ range $k, $v := $addHeaders }}
more_set_headers {{ printf "%s: %s" $k $v | quote }};
{{ end }}
server_tokens {{ if $cfg.ShowServerTokens }}on{{ else }}off{{ end }};
{{ if not $cfg.ShowServerTokens }}
more_clear_headers Server;
{{ end }}
# disable warnings
uninitialized_variable_warn off;
# Additional available variables:
# $namespace
# $ingress_name
# $service_name
# $service_port
log_format upstreaminfo {{ if $cfg.LogFormatEscapeJSON }}escape=json {{ end }}'{{ $cfg.LogFormatUpstream }}';
{{/* map urls that should not appear in access.log */}}
{{/* http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log */}}
map $request_uri $loggable {
{{ range $reqUri := $cfg.SkipAccessLogURLs }}
{{ $reqUri }} 0;{{ end }}
default 1;
}
{{ if $cfg.DisableAccessLog }}
access_log off;
{{ else }}
{{ if $cfg.EnableSyslog }}
access_log syslog:server={{ $cfg.SyslogHost }}:{{ $cfg.SyslogPort }} upstreaminfo if=$loggable;
{{ else }}
access_log {{ $cfg.AccessLogPath }} upstreaminfo {{ $cfg.AccessLogParams }} if=$loggable;
{{ end }}
{{ end }}
{{ if $cfg.EnableSyslog }}
error_log syslog:server={{ $cfg.SyslogHost }}:{{ $cfg.SyslogPort }} {{ $cfg.ErrorLogLevel }};
{{ else }}
error_log {{ $cfg.ErrorLogPath }} {{ $cfg.ErrorLogLevel }};
{{ end }}
{{ buildResolvers $cfg.Resolver $cfg.DisableIpv6DNS }}
# See https://www.nginx.com/blog/websocket-nginx
map $http_upgrade $connection_upgrade {
default upgrade;
{{ if (gt $cfg.UpstreamKeepaliveConnections 0) }}
# See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
'' '';
{{ else }}
'' close;
{{ end }}
}
# Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
# If no such header is provided, it can provide a random value.
map $http_x_request_id $req_id {
default $http_x_request_id;
{{ if $cfg.GenerateRequestID }}
"" $request_id;
{{ end }}
}
{{ if and $cfg.UseForwardedHeaders $cfg.ComputeFullForwardedFor }}
# We can't use $proxy_add_x_forwarded_for because the realip module
# replaces the remote_addr too soon
map $http_x_forwarded_for $full_x_forwarded_for {
{{ if $all.Cfg.UseProxyProtocol }}
default "$http_x_forwarded_for, $proxy_protocol_addr";
'' "$proxy_protocol_addr";
{{ else }}
default "$http_x_forwarded_for, $realip_remote_addr";
'' "$realip_remote_addr";
{{ end}}
}
{{ end }}
# Create a variable that contains the literal $ character.
# This works because the geo module will not resolve variables.
geo $literal_dollar {
default "$";
}
server_name_in_redirect off;
port_in_redirect off;
ssl_protocols {{ $cfg.SSLProtocols }};
ssl_early_data {{ if $cfg.SSLEarlyData }}on{{ else }}off{{ end }};
# turn on session caching to drastically improve performance
{{ if $cfg.SSLSessionCache }}
ssl_session_cache builtin:1000 shared:SSL:{{ $cfg.SSLSessionCacheSize }};
ssl_session_timeout {{ $cfg.SSLSessionTimeout }};
{{ end }}
# allow configuring ssl session tickets
ssl_session_tickets {{ if $cfg.SSLSessionTickets }}on{{ else }}off{{ end }};
{{ if not (empty $cfg.SSLSessionTicketKey ) }}
ssl_session_ticket_key /etc/nginx/tickets.key;
{{ end }}
# slightly reduce the time-to-first-byte
ssl_buffer_size {{ $cfg.SSLBufferSize }};
{{ if not (empty $cfg.SSLCiphers) }}
# allow configuring custom ssl ciphers
ssl_ciphers '{{ $cfg.SSLCiphers }}';
ssl_prefer_server_ciphers on;
{{ end }}
{{ if not (empty $cfg.SSLDHParam) }}
# allow custom DH file http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam
ssl_dhparam {{ $cfg.SSLDHParam }};
{{ end }}
ssl_ecdh_curve {{ $cfg.SSLECDHCurve }};
# PEM sha: {{ $cfg.DefaultSSLCertificate.PemSHA }}
ssl_certificate {{ $cfg.DefaultSSLCertificate.PemFileName }};
ssl_certificate_key {{ $cfg.DefaultSSLCertificate.PemFileName }};
{{ if gt (len $cfg.CustomHTTPErrors) 0 }}
proxy_intercept_errors on;
{{ end }}
{{ range $errCode := $cfg.CustomHTTPErrors }}
error_page {{ $errCode }} = @custom_upstream-default-backend_{{ $errCode }};{{ end }}
proxy_ssl_session_reuse on;
{{ if $cfg.AllowBackendServerHeader }}
proxy_pass_header Server;
{{ end }}
{{ range $header := $cfg.HideHeaders }}proxy_hide_header {{ $header }};
{{ end }}
{{ if not (empty $cfg.HTTPSnippet) }}
# Custom code snippet configured in the configuration configmap
{{ $cfg.HTTPSnippet }}
{{ end }}
upstream upstream_balancer {
### Attention!!!
#
# We no longer create "upstream" section for every backend.
# Backends are handled dynamically using Lua. If you would like to debug
# and see what backends ingress-nginx has in its memory you can
# install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
# Once you have the plugin you can use "kubectl ingress-nginx backends" command to
# inspect current backends.
#
###
server 0.0.0.1; # placeholder
balancer_by_lua_block {
balancer.balance()
}
{{ if (gt $cfg.UpstreamKeepaliveConnections 0) }}
keepalive {{ $cfg.UpstreamKeepaliveConnections }};
keepalive_timeout {{ $cfg.UpstreamKeepaliveTimeout }}s;
keepalive_requests {{ $cfg.UpstreamKeepaliveRequests }};
{{ end }}
}
{{ range $rl := (filterRateLimits $servers ) }}
# Ratelimit {{ $rl.Name }}
geo $remote_addr $whitelist_{{ $rl.ID }} {
default 0;
{{ range $ip := $rl.Whitelist }}
{{ $ip }} 1;{{ end }}
}
# Ratelimit {{ $rl.Name }}
map $whitelist_{{ $rl.ID }} $limit_{{ $rl.ID }} {
0 {{ $cfg.LimitConnZoneVariable }};
1 "";
}
{{ end }}
{{/* build all the required rate limit zones. Each annotation requires a dedicated zone */}}
{{/* 1MB -> 16 thousand 64-byte states or about 8 thousand 128-byte states */}}
{{ range $zone := (buildRateLimitZones $servers) }}
{{ $zone }}
{{ end }}
# Cache for internal auth checks
proxy_cache_path /tmp/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off;
# Global filters
{{ range $ip := $cfg.BlockCIDRs }}deny {{ trimSpace $ip }};
{{ end }}
{{ if gt (len $cfg.BlockUserAgents) 0 }}
map $http_user_agent $block_ua {
default 0;
{{ range $ua := $cfg.BlockUserAgents }}{{ trimSpace $ua }} 1;
{{ end }}
}
{{ end }}
{{ if gt (len $cfg.BlockReferers) 0 }}
map $http_referer $block_ref {
default 0;
{{ range $ref := $cfg.BlockReferers }}{{ trimSpace $ref }} 1;
{{ end }}
}
{{ end }}
{{/* Build server redirects (from/to www) */}}
{{ range $redirect := .RedirectServers }}
## start server {{ $redirect.From }}
server {
server_name {{ $redirect.From }};
{{ buildHTTPListener $all $redirect.From }}
{{ buildHTTPSListener $all $redirect.From }}
ssl_certificate_by_lua_block {
certificate.call()
}
{{ if gt (len $cfg.BlockUserAgents) 0 }}
if ($block_ua) {
return 403;
}
{{ end }}
{{ if gt (len $cfg.BlockReferers) 0 }}
if ($block_ref) {
return 403;
}
{{ end }}
{{ if ne $all.ListenPorts.HTTPS 443 }}
{{ $redirect_port := (printf ":%v" $all.ListenPorts.HTTPS) }}
return {{ $all.Cfg.HTTPRedirectCode }} $scheme://{{ $redirect.To }}{{ $redirect_port }}$request_uri;
{{ else }}
return {{ $all.Cfg.HTTPRedirectCode }} $scheme://{{ $redirect.To }}$request_uri;
{{ end }}
}
## end server {{ $redirect.From }}
{{ end }}
{{ range $server := $servers }}
## start server {{ $server.Hostname }}
server {
server_name {{ $server.Hostname }} {{range $server.Aliases }}{{ . }} {{ end }};
{{ if gt (len $cfg.BlockUserAgents) 0 }}
if ($block_ua) {
return 403;
}
{{ end }}
{{ if gt (len $cfg.BlockReferers) 0 }}
if ($block_ref) {
return 403;
}
{{ end }}
{{ template "SERVER" serverConfig $all $server }}
{{ if not (empty $cfg.ServerSnippet) }}
# Custom code snippet configured in the configuration configmap
{{ $cfg.ServerSnippet }}
{{ end }}
{{ template "CUSTOM_ERRORS" (buildCustomErrorDeps "upstream-default-backend" $cfg.CustomHTTPErrors $all.EnableMetrics) }}
}
## end server {{ $server.Hostname }}
{{ end }}
# backend for when default-backend-service is not configured or it does not have endpoints
server {
listen {{ $all.ListenPorts.Default }} default_server {{ if $all.Cfg.ReusePort }}reuseport{{ end }} backlog={{ $all.BacklogSize }};
{{ if $IsIPV6Enabled }}listen [::]:{{ $all.ListenPorts.Default }} default_server {{ if $all.Cfg.ReusePort }}reuseport{{ end }} backlog={{ $all.BacklogSize }};{{ end }}
set $proxy_upstream_name "internal";
access_log off;
location / {
return 404;
}
}
# default server, used for NGINX healthcheck and access to nginx stats
server {
listen 127.0.0.1:{{ .StatusPort }};
set $proxy_upstream_name "internal";
keepalive_timeout 0;
gzip off;
access_log off;
{{ if $cfg.EnableOpentracing }}
opentracing off;
{{ end }}
location {{ $healthzURI }} {
return 200;
}
location /is-dynamic-lb-initialized {
content_by_lua_block {
local configuration = require("configuration")
local backend_data = configuration.get_backends_data()
if not backend_data then
ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
return
end
ngx.say("OK")
ngx.exit(ngx.HTTP_OK)
}
}
location {{ .StatusPath }} {
stub_status on;
}
location /configuration {
client_max_body_size {{ luaConfigurationRequestBodySize $cfg }}m;
client_body_buffer_size {{ luaConfigurationRequestBodySize $cfg }}m;
proxy_buffering off;
content_by_lua_block {
configuration.call()
}
}
location / {
content_by_lua_block {
ngx.exit(ngx.HTTP_NOT_FOUND)
}
}
}
}
stream {
lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;";
lua_shared_dict tcp_udp_configuration_data 5M;
init_by_lua_block {
collectgarbage("collect")
-- init modules
local ok, res
ok, res = pcall(require, "configuration")
if not ok then
error("require failed: " .. tostring(res))
else
configuration = res
end
ok, res = pcall(require, "tcp_udp_configuration")
if not ok then
error("require failed: " .. tostring(res))
else
tcp_udp_configuration = res
end
ok, res = pcall(require, "tcp_udp_balancer")
if not ok then
error("require failed: " .. tostring(res))
else
tcp_udp_balancer = res
end
}
init_worker_by_lua_block {
tcp_udp_balancer.init_worker()
}
lua_add_variable $proxy_upstream_name;
log_format log_stream '{{ $cfg.LogFormatStream }}';
{{ if $cfg.DisableAccessLog }}
access_log off;
{{ else }}
access_log {{ $cfg.AccessLogPath }} log_stream {{ $cfg.AccessLogParams }};
{{ end }}
error_log {{ $cfg.ErrorLogPath }};
upstream upstream_balancer {
server 0.0.0.1:1234; # placeholder
balancer_by_lua_block {
tcp_udp_balancer.balance()
}
}
server {
listen 127.0.0.1:{{ .StreamPort }};
access_log off;
content_by_lua_block {
tcp_udp_configuration.call()
}
}
# TCP services
{{ range $tcpServer := .TCPBackends }}
server {
preread_by_lua_block {
ngx.var.proxy_upstream_name="tcp-{{ $tcpServer.Backend.Namespace }}-{{ $tcpServer.Backend.Name }}-{{ $tcpServer.Backend.Port }}";
}
{{ range $address := $all.Cfg.BindAddressIpv4 }}
listen {{ $address }}:{{ $tcpServer.Port }}{{ if $tcpServer.Backend.ProxyProtocol.Decode }} proxy_protocol{{ end }};
{{ else }}
listen {{ $tcpServer.Port }}{{ if $tcpServer.Backend.ProxyProtocol.Decode }} proxy_protocol{{ end }};
{{ end }}
{{ if $IsIPV6Enabled }}
{{ range $address := $all.Cfg.BindAddressIpv6 }}
listen {{ $address }}:{{ $tcpServer.Port }}{{ if $tcpServer.Backend.ProxyProtocol.Decode }} proxy_protocol{{ end }};
{{ else }}
listen [::]:{{ $tcpServer.Port }}{{ if $tcpServer.Backend.ProxyProtocol.Decode }} proxy_protocol{{ end }};
{{ end }}
{{ end }}
proxy_timeout {{ $cfg.ProxyStreamTimeout }};
proxy_pass upstream_balancer;
{{ if $tcpServer.Backend.ProxyProtocol.Encode }}
proxy_protocol on;
{{ end }}
}
{{ end }}
# UDP services
{{ range $udpServer := .UDPBackends }}
server {
preread_by_lua_block {
ngx.var.proxy_upstream_name="udp-{{ $udpServer.Backend.Namespace }}-{{ $udpServer.Backend.Name }}-{{ $udpServer.Backend.Port }}";
}
{{ range $address := $all.Cfg.BindAddressIpv4 }}
listen {{ $address }}:{{ $udpServer.Port }} udp;
{{ else }}
listen {{ $udpServer.Port }} udp;
{{ end }}
{{ if $IsIPV6Enabled }}
{{ range $address := $all.Cfg.BindAddressIpv6 }}
listen {{ $address }}:{{ $udpServer.Port }} udp;
{{ else }}
listen [::]:{{ $udpServer.Port }} udp;
{{ end }}
{{ end }}
proxy_responses {{ $cfg.ProxyStreamResponses }};
proxy_timeout {{ $cfg.ProxyStreamTimeout }};
proxy_pass upstream_balancer;
}
{{ end }}
}
{{/* definition of templates to avoid repetitions */}}
{{ define "CUSTOM_ERRORS" }}
{{ $enableMetrics := .EnableMetrics }}
{{ $upstreamName := .UpstreamName }}
{{ range $errCode := .ErrorCodes }}
location @custom_{{ $upstreamName }}_{{ $errCode }} {
internal;
proxy_intercept_errors off;
proxy_set_header X-Code {{ $errCode }};
proxy_set_header X-Format $http_accept;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Namespace $namespace;
proxy_set_header X-Ingress-Name $ingress_name;
proxy_set_header X-Service-Name $service_name;
proxy_set_header X-Service-Port $service_port;
proxy_set_header X-Request-ID $req_id;
proxy_set_header Host $best_http_host;
set $proxy_upstream_name {{ $upstreamName | quote }};
rewrite (.*) / break;
proxy_pass http://upstream_balancer;
log_by_lua_block {
{{ if $enableMetrics }}
monitor.call()
{{ end }}
}
}
{{ end }}
{{ end }}
{{/* CORS support from https://michielkalkman.com/snippets/nginx-cors-open-configuration.html */}}
{{ define "CORS" }}
{{ $cors := .CorsConfig }}
# Cors Preflight methods needs additional options and different Return Code
if ($request_method = 'OPTIONS') {
more_set_headers 'Access-Control-Allow-Origin: {{ $cors.CorsAllowOrigin }}';
{{ if $cors.CorsAllowCredentials }} more_set_headers 'Access-Control-Allow-Credentials: {{ $cors.CorsAllowCredentials }}'; {{ end }}
more_set_headers 'Access-Control-Allow-Methods: {{ $cors.CorsAllowMethods }}';
more_set_headers 'Access-Control-Allow-Headers: {{ $cors.CorsAllowHeaders }}';
more_set_headers 'Access-Control-Max-Age: {{ $cors.CorsMaxAge }}';
more_set_headers 'Content-Type: text/plain charset=UTF-8';
more_set_headers 'Content-Length: 0';
return 204;
}
more_set_headers 'Access-Control-Allow-Origin: {{ $cors.CorsAllowOrigin }}';
{{ if $cors.CorsAllowCredentials }} more_set_headers 'Access-Control-Allow-Credentials: {{ $cors.CorsAllowCredentials }}'; {{ end }}
more_set_headers 'Access-Control-Allow-Methods: {{ $cors.CorsAllowMethods }}';
more_set_headers 'Access-Control-Allow-Headers: {{ $cors.CorsAllowHeaders }}';
{{ end }}
{{/* definition of server-template to avoid repetitions with server-alias */}}
{{ define "SERVER" }}
{{ $all := .First }}
{{ $server := .Second }}
{{ buildHTTPListener $all $server.Hostname }}
{{ buildHTTPSListener $all $server.Hostname }}
set $proxy_upstream_name "-";
ssl_certificate_by_lua_block {
certificate.call()
}
{{ if eq $server.Hostname "_" }}
return 444;
{{ end }}
{{ if not (empty $server.AuthTLSError) }}
# {{ $server.AuthTLSError }}
return 403;
{{ else }}
{{ if not (empty $server.CertificateAuth.CAFileName) }}
# PEM sha: {{ $server.CertificateAuth.CASHA }}
ssl_client_certificate {{ $server.CertificateAuth.CAFileName }};
ssl_verify_client {{ $server.CertificateAuth.VerifyClient }};
ssl_verify_depth {{ $server.CertificateAuth.ValidationDepth }};
{{ if not (empty $server.CertificateAuth.CRLFileName) }}
# PEM sha: {{ $server.CertificateAuth.CRLSHA }}
ssl_crl {{ $server.CertificateAuth.CRLFileName }};
{{ end }}
{{ if not (empty $server.CertificateAuth.ErrorPage)}}
error_page 495 496 = {{ $server.CertificateAuth.ErrorPage }};
{{ end }}
{{ end }}
{{ if not (empty $server.ProxySSL.CAFileName) }}
# PEM sha: {{ $server.ProxySSL.CASHA }}
proxy_ssl_trusted_certificate {{ $server.ProxySSL.CAFileName }};
proxy_ssl_ciphers {{ $server.ProxySSL.Ciphers }};
proxy_ssl_protocols {{ $server.ProxySSL.Protocols }};
proxy_ssl_verify {{ $server.ProxySSL.Verify }};
proxy_ssl_verify_depth {{ $server.ProxySSL.VerifyDepth }};
{{ end }}
{{ if not (empty $server.ProxySSL.PemFileName) }}
proxy_ssl_certificate {{ $server.ProxySSL.PemFileName }};
proxy_ssl_certificate_key {{ $server.ProxySSL.PemFileName }};
{{ end }}
{{ if not (empty $server.SSLCiphers) }}
ssl_ciphers {{ $server.SSLCiphers }};
{{ end }}
{{ if not (empty $server.ServerSnippet) }}
{{ $server.ServerSnippet }}
{{ end }}
{{ range $errorLocation := (buildCustomErrorLocationsPerServer $server) }}
{{ template "CUSTOM_ERRORS" (buildCustomErrorDeps $errorLocation.UpstreamName $errorLocation.Codes $all.EnableMetrics) }}
{{ end }}
{{ $enforceRegex := enforceRegexModifier $server.Locations }}
{{ range $location := $server.Locations }}
{{ $path := buildLocation $location $enforceRegex }}
{{ $proxySetHeader := proxySetHeader $location }}
{{ $authPath := buildAuthLocation $location $all.Cfg.GlobalExternalAuth.URL }}
{{ $applyGlobalAuth := shouldApplyGlobalAuth $location $all.Cfg.GlobalExternalAuth.URL }}
{{ $externalAuth := $location.ExternalAuth }}
{{ if eq $applyGlobalAuth true }}
{{ $externalAuth = $all.Cfg.GlobalExternalAuth }}
{{ end }}
{{ if not (empty $location.Rewrite.AppRoot)}}
if ($uri = /) {
return 302 {{ $location.Rewrite.AppRoot }};
}
{{ end }}
{{ if $authPath }}
location = {{ $authPath }} {
internal;
{{ if $externalAuth.AuthCacheKey }}
set $tmp_cache_key '{{ $server.Hostname }}{{ $authPath }}{{ $externalAuth.AuthCacheKey }}';
set $cache_key '';
rewrite_by_lua_block {
ngx.var.cache_key = ngx.encode_base64(ngx.sha1_bin(ngx.var.tmp_cache_key))
}
proxy_cache auth_cache;
{{- range $dur := $externalAuth.AuthCacheDuration }}
proxy_cache_valid {{ $dur }};
{{- end }}
proxy_cache_key "$cache_key";
{{ end }}
# ngx_auth_request module overrides variables in the parent request,
# therefore we have to explicitly set this variable again so that when the parent request
# resumes it has the correct value set for this variable so that Lua can pick backend correctly
set $proxy_upstream_name {{ buildUpstreamName $location | quote }};
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header X-Forwarded-Proto "";
{{ if $externalAuth.Method }}
proxy_method {{ $externalAuth.Method }};
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
{{ end }}
proxy_set_header Host {{ $externalAuth.Host }};
proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
proxy_set_header X-Original-Method $request_method;
proxy_set_header X-Sent-From "nginx-ingress-controller";
proxy_set_header X-Real-IP $remote_addr;
{{ if and $all.Cfg.UseForwardedHeaders $all.Cfg.ComputeFullForwardedFor }}
proxy_set_header X-Forwarded-For $full_x_forwarded_for;
{{ else }}
proxy_set_header X-Forwarded-For $remote_addr;
{{ end }}
{{ if $externalAuth.RequestRedirect }}
proxy_set_header X-Auth-Request-Redirect {{ $externalAuth.RequestRedirect }};
{{ else }}
proxy_set_header X-Auth-Request-Redirect $request_uri;
{{ end }}
{{ if $externalAuth.AuthCacheKey }}
proxy_buffering "on";
{{ else }}
proxy_buffering {{ $location.Proxy.ProxyBuffering }};
{{ end }}
proxy_buffer_size {{ $location.Proxy.BufferSize }};
proxy_buffers {{ $location.Proxy.BuffersNumber }} {{ $location.Proxy.BufferSize }};
proxy_request_buffering {{ $location.Proxy.RequestBuffering }};
proxy_http_version {{ $location.Proxy.ProxyHTTPVersion }};
proxy_ssl_server_name on;
proxy_pass_request_headers on;
{{ if isValidByteSize $location.Proxy.BodySize true }}
client_max_body_size {{ $location.Proxy.BodySize }};
{{ end }}
{{ if isValidByteSize $location.ClientBodyBufferSize false }}
client_body_buffer_size {{ $location.ClientBodyBufferSize }};
{{ end }}
# Pass the extracted client certificate to the auth provider
{{ if not (empty $server.CertificateAuth.CAFileName) }}
{{ if $server.CertificateAuth.PassCertToUpstream }}
proxy_set_header ssl-client-cert $ssl_client_escaped_cert;
{{ end }}
proxy_set_header ssl-client-verify $ssl_client_verify;
proxy_set_header ssl-client-subject-dn $ssl_client_s_dn;
proxy_set_header ssl-client-issuer-dn $ssl_client_i_dn;
{{ end }}
{{- range $line := buildAuthProxySetHeaders $externalAuth.ProxySetHeaders}}
{{ $line }}
{{- end }}
{{ if not (empty $externalAuth.AuthSnippet) }}
{{ $externalAuth.AuthSnippet }}
{{ end }}
set $target {{ $externalAuth.URL }};
proxy_pass $target;
}
{{ end }}
{{ if $externalAuth.SigninURL }}
location {{ buildAuthSignURLLocation $location.Path $externalAuth.SigninURL }} {
internal;
return 302 {{ buildAuthSignURL $externalAuth.SigninURL }};
}
{{ end }}
location {{ $path }} {
{{ $ing := (getIngressInformation $location.Ingress $server.Hostname $location.Path) }}
set $namespace {{ $ing.Namespace | quote}};
set $ingress_name {{ $ing.Rule | quote }};
set $service_name {{ $ing.Service | quote }};
set $service_port {{ $ing.ServicePort | quote }};
set $location_path {{ $location.Path | escapeLiteralDollar | quote }};
{{ if $all.Cfg.EnableOpentracing }}
{{ if and $location.Opentracing.Set (not $location.Opentracing.Enabled) }}
opentracing off;
{{ else }}
{{ opentracingPropagateContext $location }};
{{ end }}
{{ else }}
{{ if and $location.Opentracing.Set $location.Opentracing.Enabled }}
opentracing on;
{{ opentracingPropagateContext $location }};
{{ end }}
{{ end }}
{{ if $location.Mirror.URI }}
mirror {{ $location.Mirror.URI }};
mirror_request_body {{ $location.Mirror.RequestBody }};
{{ end }}
rewrite_by_lua_block {
lua_ingress.rewrite({{ locationConfigForLua $location $all }})
balancer.rewrite()
plugins.run()
}
# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
# other authentication method such as basic auth or external auth useless - all requests will be allowed.
#access_by_lua_block {
#}
header_filter_by_lua_block {
lua_ingress.header()
plugins.run()
}
body_filter_by_lua_block {
}
log_by_lua_block {
balancer.log()
{{ if $all.EnableMetrics }}
monitor.call()
{{ end }}
plugins.run()
}
{{ if not $location.Logs.Access }}
access_log off;
{{ end }}
{{ if $location.Logs.Rewrite }}
rewrite_log on;
{{ end }}
{{ if $location.HTTP2PushPreload }}
http2_push_preload on;
{{ end }}
port_in_redirect {{ if $location.UsePortInRedirects }}on{{ else }}off{{ end }};
set $balancer_ewma_score -1;
set $proxy_upstream_name {{ buildUpstreamName $location | quote }};
set $proxy_host $proxy_upstream_name;
set $pass_access_scheme $scheme;
set $pass_server_port $server_port;
set $best_http_host $http_host;
set $pass_port $pass_server_port;
set $proxy_alternative_upstream_name "";
{{ if (or $location.ModSecurity.Enable $all.Cfg.EnableModsecurity) }}
{{ if not $all.Cfg.EnableModsecurity }}
modsecurity on;
modsecurity_rules_file /etc/nginx/modsecurity/modsecurity.conf;
{{ end }}
{{ if $location.ModSecurity.Snippet }}
modsecurity_rules '
{{ $location.ModSecurity.Snippet }}
';
{{ else if (and (not $all.Cfg.EnableOWASPCoreRules) ($location.ModSecurity.OWASPRules))}}
modsecurity_rules_file /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf;
{{ end }}
{{ if (not (empty $location.ModSecurity.TransactionID)) }}
modsecurity_transaction_id {{ $location.ModSecurity.TransactionID | quote }};
{{ end }}
{{ end }}
{{ if isLocationAllowed $location }}
{{ if gt (len $location.Whitelist.CIDR) 0 }}
{{ range $ip := $location.Whitelist.CIDR }}
allow {{ $ip }};{{ end }}
deny all;
{{ end }}
{{ if not (isLocationInLocationList $location $all.Cfg.NoAuthLocations) }}
{{ if $authPath }}
# this location requires authentication
auth_request {{ $authPath }};
auth_request_set $auth_cookie $upstream_http_set_cookie;
add_header Set-Cookie $auth_cookie;
{{- range $line := buildAuthResponseHeaders $externalAuth.ResponseHeaders }}
{{ $line }}
{{- end }}
{{ end }}
{{ if $externalAuth.SigninURL }}
set_escape_uri $escaped_request_uri $request_uri;
error_page 401 = {{ buildAuthSignURLLocation $location.Path $externalAuth.SigninURL }};
{{ end }}
{{ if $location.BasicDigestAuth.Secured }}
{{ if eq $location.BasicDigestAuth.Type "basic" }}
auth_basic {{ $location.BasicDigestAuth.Realm | quote }};
auth_basic_user_file {{ $location.BasicDigestAuth.File }};
{{ else }}
auth_digest {{ $location.BasicDigestAuth.Realm | quote }};
auth_digest_user_file {{ $location.BasicDigestAuth.File }};
{{ end }}
proxy_set_header Authorization "";
{{ end }}
{{ end }}
{{/* if the location contains a rate limit annotation, create one */}}
{{ $limits := buildRateLimit $location }}
{{ range $limit := $limits }}
{{ $limit }}{{ end }}
{{ if $location.CorsConfig.CorsEnabled }}
{{ template "CORS" $location }}
{{ end }}
{{ buildInfluxDB $location.InfluxDB }}
{{ if not (empty $location.Redirect.URL) }}
if ($uri ~* {{ stripLocationModifer $path }}) {
return {{ $location.Redirect.Code }} {{ $location.Redirect.URL }};
}
{{ end }}
{{ if isValidByteSize $location.Proxy.BodySize true }}
client_max_body_size {{ $location.Proxy.BodySize }};
{{ end }}
{{ if isValidByteSize $location.ClientBodyBufferSize false }}
client_body_buffer_size {{ $location.ClientBodyBufferSize }};
{{ end }}
{{/* By default use vhost as Host to upstream, but allow overrides */}}
{{ if not (eq $proxySetHeader "grpc_set_header") }}
{{ if not (empty $location.UpstreamVhost) }}
{{ $proxySetHeader }} Host {{ $location.UpstreamVhost | quote }};
{{ else }}
{{ $proxySetHeader }} Host $best_http_host;
{{ end }}
{{ end }}
# Pass the extracted client certificate to the backend
{{ if not (empty $server.CertificateAuth.CAFileName) }}
{{ if $server.CertificateAuth.PassCertToUpstream }}
{{ $proxySetHeader }} ssl-client-cert $ssl_client_escaped_cert;
{{ end }}
{{ $proxySetHeader }} ssl-client-verify $ssl_client_verify;
{{ $proxySetHeader }} ssl-client-subject-dn $ssl_client_s_dn;
{{ $proxySetHeader }} ssl-client-issuer-dn $ssl_client_i_dn;
{{ end }}
# Allow websocket connections
{{ $proxySetHeader }} Upgrade $http_upgrade;
{{ if $location.Connection.Enabled}}
{{ $proxySetHeader }} Connection {{ $location.Connection.Header }};
{{ else }}
{{ $proxySetHeader }} Connection $connection_upgrade;
{{ end }}
{{ $proxySetHeader }} X-Request-ID $req_id;
{{ $proxySetHeader }} X-Real-IP $remote_addr;
{{ if and $all.Cfg.UseForwardedHeaders $all.Cfg.ComputeFullForwardedFor }}
{{ $proxySetHeader }} X-Forwarded-For $full_x_forwarded_for;
{{ else }}
{{ $proxySetHeader }} X-Forwarded-For $remote_addr;
{{ end }}
{{ $proxySetHeader }} X-Forwarded-Host $best_http_host;
{{ $proxySetHeader }} X-Forwarded-Port $pass_port;
{{ $proxySetHeader }} X-Forwarded-Proto $pass_access_scheme;
{{ if $all.Cfg.ProxyAddOriginalURIHeader }}
{{ $proxySetHeader }} X-Original-URI $request_uri;
{{ end }}
{{ $proxySetHeader }} X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
{{ $proxySetHeader }} X-Original-Forwarded-For {{ buildForwardedFor $all.Cfg.ForwardedForHeader }};
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
{{ $proxySetHeader }} Proxy "";
# Custom headers to proxied server
{{ range $k, $v := $all.ProxySetHeaders }}
{{ $proxySetHeader }} {{ $k }} {{ $v | quote }};
{{ end }}
proxy_connect_timeout {{ $location.Proxy.ConnectTimeout }}s;
proxy_send_timeout {{ $location.Proxy.SendTimeout }}s;
proxy_read_timeout {{ $location.Proxy.ReadTimeout }}s;
proxy_buffering {{ $location.Proxy.ProxyBuffering }};
proxy_buffer_size {{ $location.Proxy.BufferSize }};
proxy_buffers {{ $location.Proxy.BuffersNumber }} {{ $location.Proxy.BufferSize }};
{{ if isValidByteSize $location.Proxy.ProxyMaxTempFileSize true }}
proxy_max_temp_file_size {{ $location.Proxy.ProxyMaxTempFileSize }};
{{ end }}
proxy_request_buffering {{ $location.Proxy.RequestBuffering }};
proxy_http_version {{ $location.Proxy.ProxyHTTPVersion }};
proxy_cookie_domain {{ $location.Proxy.CookieDomain }};
proxy_cookie_path {{ $location.Proxy.CookiePath }};
# In case of errors try the next upstream server before returning an error
proxy_next_upstream {{ buildNextUpstream $location.Proxy.NextUpstream $all.Cfg.RetryNonIdempotent }};
proxy_next_upstream_timeout {{ $location.Proxy.NextUpstreamTimeout }};
proxy_next_upstream_tries {{ $location.Proxy.NextUpstreamTries }};
{{/* Add any additional configuration defined */}}
{{ $location.ConfigurationSnippet }}
{{ if not (empty $all.Cfg.LocationSnippet) }}
# Custom code snippet configured in the configuration configmap
{{ $all.Cfg.LocationSnippet }}
{{ end }}
{{/* if we are sending the request to a custom default backend, we add the required headers */}}
{{ if (hasPrefix $location.Backend "custom-default-backend-") }}
proxy_set_header X-Code 503;
proxy_set_header X-Format $http_accept;
proxy_set_header X-Namespace $namespace;
proxy_set_header X-Ingress-Name $ingress_name;
proxy_set_header X-Service-Name $service_name;
proxy_set_header X-Service-Port $service_port;
proxy_set_header X-Request-ID $req_id;
{{ end }}
{{ if $location.Satisfy }}
satisfy {{ $location.Satisfy }};
{{ end }}
{{/* if a location-specific error override is set, add the proxy_intercept here */}}
{{ if $location.CustomHTTPErrors }}
# Custom error pages per ingress
proxy_intercept_errors on;
{{ end }}
{{ range $errCode := $location.CustomHTTPErrors }}
error_page {{ $errCode }} = @custom_{{ $location.DefaultBackendUpstreamName }}_{{ $errCode }};{{ end }}
{{ if (eq $location.BackendProtocol "FCGI") }}
include /etc/nginx/fastcgi_params;
{{ end }}
{{- if $location.FastCGI.Index -}}
fastcgi_index {{ $location.FastCGI.Index | quote }};
{{- end -}}
{{ range $k, $v := $location.FastCGI.Params }}
fastcgi_param {{ $k }} {{ $v | quote }};
{{ end }}
{{ buildProxyPass $server.Hostname $all.Backends $location }}
{{ if (or (eq $location.Proxy.ProxyRedirectFrom "default") (eq $location.Proxy.ProxyRedirectFrom "off")) }}
proxy_redirect {{ $location.Proxy.ProxyRedirectFrom }};
{{ else if not (eq $location.Proxy.ProxyRedirectTo "off") }}
proxy_redirect {{ $location.Proxy.ProxyRedirectFrom }} {{ $location.Proxy.ProxyRedirectTo }};
{{ end }}
{{ else }}
# Location denied. Reason: {{ $location.Denied | quote }}
return 503;
{{ end }}
{{ if not (empty $location.ProxySSL.CAFileName) }}
# PEM sha: {{ $location.ProxySSL.CASHA }}
proxy_ssl_trusted_certificate {{ $location.ProxySSL.CAFileName }};
proxy_ssl_ciphers {{ $location.ProxySSL.Ciphers }};
proxy_ssl_protocols {{ $location.ProxySSL.Protocols }};
proxy_ssl_verify {{ $location.ProxySSL.Verify }};
proxy_ssl_verify_depth {{ $location.ProxySSL.VerifyDepth }};
{{ end }}
{{ if not (empty $location.ProxySSL.PemFileName) }}
proxy_ssl_certificate {{ $location.ProxySSL.PemFileName }};
proxy_ssl_certificate_key {{ $location.ProxySSL.PemFileName }};
{{ end }}
}
{{ end }}
{{ end }}
{{ if eq $server.Hostname "_" }}
# health checks in cloud providers require the use of port {{ $all.ListenPorts.HTTP }}
location {{ $all.HealthzURI }} {
{{ if $all.Cfg.EnableOpentracing }}
opentracing off;
{{ end }}
access_log off;
return 200;
}
# this is required to avoid error if nginx is being monitored
# with an external software (like sysdig)
location /nginx_status {
{{ if $all.Cfg.EnableOpentracing }}
opentracing off;
{{ end }}
{{ range $v := $all.NginxStatusIpv4Whitelist }}
allow {{ $v }};
{{ end }}
{{ if $all.IsIPV6Enabled -}}
{{ range $v := $all.NginxStatusIpv6Whitelist }}
allow {{ $v }};
{{ end }}
{{ end -}}
deny all;
access_log off;
stub_status on;
}
{{ end }}
{{ end }}
Relevant bit:
{{ define "SERVER" }}
{{ $all := .First }}
{{ $server := .Second }}
{{ buildHTTPListener $all $server.Hostname }}
{{ buildHTTPSListener $all $server.Hostname }}
set $proxy_upstream_name "-";
ssl_certificate_by_lua_block {
certificate.call()
}
{{ if eq $server.Hostname "_" }}
return 444;
{{ end }}
I get the 444 connection-closed on the HTTP endpoint fine, but the HTTPS endpoint gives me back an ACME certificate instead of a 444 connection close. It could possibly be returning it, but the cert is invalid, so it never resolves to the 444. Unless I am doing something wrong with my template, I am not seeing a simple solution to this likely common problem, and that makes me a sad panda. :'(
If you return anything, it will come back with a cert and you'll be failed by PCI. No response, or an otherwise valid cert, is your only option. They will accept any cert that is time-valid and issued by a global CA (including Let's Encrypt).
@sharkymcdongles the SSL part is done in lua https://github.com/kubernetes/ingress-nginx/blob/master/rootfs/etc/nginx/lua/certificate.lua#L226
you need to change this part
if not hostname then
ngx.log(ngx.INFO, "obtained hostname is nil (the client does "
.. "not support SNI?), falling back to default certificate")
ngx.exit(444)
end
Is this even possible to target from the custom go templating without building my own image/binaries?
Because if I have to build it on my own, that seems a bit much for such a simple change. I can do that, sure, but then I need to maintain it across every update etc.
Never mind, I can just create another cm and mount it there. Silly question.
Tried your solution @aledbf and get:
I0723 21:09:04.965269 7 controller.go:153] Backend successfully reloaded.
I0723 21:09:04.965306 7 controller.go:162] Initial sync, sleeping for 1 second.
2020/07/23 21:09:04 [error] 42#42: init_by_lua error: init_by_lua:9: require failed: module 'lua_ingress' not found:
no field package.preload['lua_ingress']
no file '/etc/nginx/lua/lua_ingress.lua'
no file '../lua-resty-core/lib/lua_ingress.lua'
no file '../lua-resty-lrucache/lib/lua_ingress.lua'
no file '/usr/local/share/luajit-2.1.0-beta3/lua_ingress.lua'
no file '/usr/local/share/lua/5.1/lua_ingress.lua'
no file '/usr/local/lib/lua/lua_ingress.lua'
no file './lua_ingress.lua'
no file '/usr/local/share/luajit-2.1.0-beta3/lua_ingress.lua'
no file '/usr/local/share/lua/5.1/lua_ingress.lua'
no file '/usr/local/share/lua/5.1/lua_ingress/init.lua'
no file '/usr/local/lib/lua/lua_ingress/lua_ingress.so'
no file '/usr/local/lib/lua/lua_ingress.so'
no file './lua_ingress.so'
no file '/usr/local/lib/lua/5.1/lua_ingress.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'error'
init_by_lua:9: in main chunk
@aledbf just to be clear I did exactly as you pasted. I created a configmap from the lua file you linked after I edited it to have your change:
k create cm certificate-lua --from-file=/tmp/certificate.lua
I then edited the Nginx values to have extra mounts like for the nginx.conf template:
extraVolumeMounts:
- mountPath: /etc/nginx/template
name: nginx-template-volume
readOnly: true
- mountPath: /etc/nginx/lua/
name: certificate-lua-volume
readOnly: true
extraVolumes:
- name: nginx-template-volume
configMap:
name: nginx-template
items:
- key: nginx.tmpl
path: nginx.tmpl
- name: certificate-lua-volume
configMap:
name: certificate-lua
items:
- key: certificate.lua
path: certificate.lua
Please let me know what I am doing stupid 😄
@sharkymcdongles you need to mount the file in the directory, like
extraVolumeMounts:
- mountPath: /etc/nginx/lua/certificate.lua
name: certificate-lua-volume
subPath: certificate.lua
extraVolumes:
- name: certificate-lua-volume
configMap:
name: certificate-lua
items:
- key: certificate.lua
path: certificate.lua
otherwise, the /etc/nginx/lua/ content is replaced and only certificate.lua is there
@aledbf Thank you! I knew I was doing something stupid. 😅
Somehow, though, with the Lua change for SSL in place, the return 444 for non-SSL now takes me to the first server block after the default host for some reason.
What I have for the default after template generation:
## start server _
server {
server_name _ ;
return 444;
set $proxy_upstream_name "-";
ssl_certificate_by_lua_block {
certificate.call()
}
location / {
set $namespace "";
set $ingress_name "";
set $service_name "";
set $service_port "";
set $location_path "/";
opentracing_propagate_context;
rewrite_by_lua_block {
lua_ingress.rewrite({
force_ssl_redirect = false,
ssl_redirect = false,
force_no_ssl_redirect = false,
use_port_in_redirects = false,
})
balancer.rewrite()
plugins.run()
}
Here is a slightly bigger paste:
# Global filters
## start server _
server {
server_name _ ;
return 444;
set $proxy_upstream_name "-";
ssl_certificate_by_lua_block {
certificate.call()
}
location / {
set $namespace "";
set $ingress_name "";
set $service_name "";
set $service_port "";
set $location_path "/";
opentracing_propagate_context;
rewrite_by_lua_block {
lua_ingress.rewrite({
force_ssl_redirect = false,
ssl_redirect = false,
force_no_ssl_redirect = false,
use_port_in_redirects = false,
})
balancer.rewrite()
plugins.run()
}
# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
# other authentication method such as basic auth or external auth useless - all requests will be allowed.
#access_by_lua_block {
#}
header_filter_by_lua_block {
lua_ingress.header()
plugins.run()
}
body_filter_by_lua_block {
}
log_by_lua_block {
balancer.log()
monitor.call()
plugins.run()
}
access_log off;
port_in_redirect off;
set $balancer_ewma_score -1;
set $proxy_upstream_name "upstream-default-backend";
set $proxy_host $proxy_upstream_name;
set $pass_access_scheme $scheme;
set $pass_server_port $server_port;
set $best_http_host $http_host;
set $pass_port $pass_server_port;
set $proxy_alternative_upstream_name "";
client_max_body_size 1m;
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Request-ID $req_id;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering off;
proxy_buffer_size 4k;
proxy_buffers 4 4k;
proxy_max_temp_file_size 1024m;
proxy_request_buffering on;
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout;
proxy_next_upstream_timeout 0;
proxy_next_upstream_tries 3;
proxy_pass http://upstream_balancer;
proxy_redirect off;
}
# health checks in cloud providers require the use of port 80
location /healthz {
opentracing off;
access_log off;
return 200;
}
# this is required to avoid error if nginx is being monitored
# with an external software (like sysdig)
location /nginx_status {
opentracing off;
allow 127.0.0.1;
deny all;
access_log off;
stub_status on;
}
}
## end server _
## start server alertmanager.censored.com
server {
server_name alertmanager.censored.com ;
listen 80 ;
listen 443 ssl http2 ;
set $proxy_upstream_name "-";
ssl_certificate_by_lua_block {
certificate.call()
}
location / {
set $namespace "monitoring";
set $ingress_name "prometheus-operator-alertmanager";
set $service_name "prometheus-operator-alertmanager";
set $service_port "9093";
set $location_path "/";
opentracing_propagate_context;
rewrite_by_lua_block {
lua_ingress.rewrite({
force_ssl_redirect = false,
ssl_redirect = true,
force_no_ssl_redirect = false,
use_port_in_redirects = false,
})
balancer.rewrite()
plugins.run()
}
# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
# other authentication method such as basic auth or external auth useless - all requests will be allowed.
#access_by_lua_block {
#}
header_filter_by_lua_block {
lua_ingress.header()
plugins.run()
}
Not quite sure where it is going wrong or why it is taking me to alertmanager now when I go to just the IP without https. Behavior was 444 before without issues.
@sharkymcdongles use the following patch to change the template. Only this change is required (remove the lua mount):
diff --git a/rootfs/etc/nginx/template/nginx.tmpl b/rootfs/etc/nginx/template/nginx.tmpl
index 0b6240d1d..a2952b766 100755
--- a/rootfs/etc/nginx/template/nginx.tmpl
+++ b/rootfs/etc/nginx/template/nginx.tmpl
@@ -576,6 +576,15 @@ http {
}
{{ end }}
+ {{ if eq $server.Hostname "_" }}
+ {{ buildHTTPListener $all $server.Hostname }}
+ {{ buildHTTPSListener $all $server.Hostname }}
+
+ set $proxy_upstream_name "-";
+ set $proxy_alternative_upstream_name "";
+
+ return 444;
+ {{ else }}
{{ template "SERVER" serverConfig $all $server }}
{{ if not (empty $cfg.ServerSnippet) }}
@@ -584,6 +593,7 @@ http {
{{ end }}
{{ template "CUSTOM_ERRORS" (buildCustomErrorDeps "upstream-default-backend" $cfg.CustomHTTPErrors $all.EnableMetrics) }}
+ {{ end }}
}
## end server {{ $server.Hostname }}
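For reference, here is a sketch of roughly what the patched catch-all server renders to. The listen lines are assumptions on my part, since they actually come from buildHTTPListener/buildHTTPSListener and vary with controller flags:

```nginx
## start server _
server {
    # rendered by buildHTTPListener / buildHTTPSListener; the exact flags
    # (default_server, reuseport, backlog, ports) depend on controller settings
    listen 80 default_server;
    listen 443 ssl http2 default_server;

    set $proxy_upstream_name "-";
    set $proxy_alternative_upstream_name "";

    # 444 is nginx's "close the connection without sending any response";
    # what a TLS client sees during the handshake still depends on which
    # certificate (if any) nginx can serve for this server block
    return 444;
}
## end server _
```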
@aledbf thanks, now it works! I realize now that I had accidentally begun adding a second if clause to try to solve the issue with the SSL not returning, and forgot to revert it. After removing it as you suggested, it works fine.
Amazing stuff! Thanks so much! I do agree with @JohnGalt1717 that adding this as a one-liner ConfigMap toggle is probably the best option, but it is way less urgent now that I have a way around it, even if it means creating ConfigMaps outside of the helm chart, which is bad practice IMO. Maybe a nice compromise would be allowing an extra configmaps value or something where you can pass the templates or settings files as base64 strings that are then decoded and placed in a ConfigMap and added as an extra volume.
Maybe a nice compromise would be allowing an extra configmaps value or something where you can pass the templates or settings files as base64 strings that are then decoded and placed in a ConfigMap and added as an extra volume.
This is documented https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/ Basically what you did. A more automated thing could lead to unexpected behavior.
from the page: Please note the template is tied to the Go code. Do not change names in the variable $cfg.
There is no way to bypass that with the current approach of using a go template. You need to test the upgrade before any change in a production environment.
You are correct, and I agree it should be thoroughly tested first. That being said, once I do test it and have it working for sure, I have no way to roll the settings out without manually adding ConfigMaps that are then referenced by the values YAML for extraVolumes and extraVolumeMounts. A helm chart should be able to template anything it might need or depend on, in my opinion, as the goal is a single unit of deployment. After this workaround I now have two things I must manage and deploy before I can deploy the ingress controller, or I end up in a bad situation.
It isn't a huge thing, but it would streamline things imo. I am happy to submit an MR too for this if it is likely to get accepted.
A helm template should he able to template anything it might need or depend on in my opinion as the goal is a single unit of deployment.
Right. The problem here is the go template comes from the docker image. Not sure is there an easy way to copy the template from the image.
I don't want to copy it from the image and have it in the chart. It can just be documented that people can create cms from the files in the rootfs dir on GitHub to overwrite and change behaviour (with a bunch of warnings, naturally). Then they can supply the templates base64ed, and the chart will generate a ConfigMap with a key that can be reffed and mounted in the correct place. Base64 just because putting such big files with tons of newlines and what have you in a values file is a recipe for pain lol.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Hello! Any updates on this issue? It doesn't seem to be completely resolved: even though there is an option to mount a custom ConfigMap with the nginx conf template, that is an operational nightmare. You always have to check that your template is up to date, and instead of just using the default helm chart and setting some values, you also have to create ConfigMaps with your template. If this behaviour varies from one nginx-ingress user to another, shouldn't it be configurable with runtime flags, improving the overall experience of operating the ingress controller? I have to agree with @JohnGalt1717 that this issue (serving a self-signed certificate) gets flagged by security audits. Thanks!
cc: @aledbf
This is indeed a security issue, and I can't see any reason not to let us disable the default rule and get rid of direct-IP responses.
Same issue here when assessing the security of an installation with this ingress controller.
IMHO we should have an easy way to disable responses to the bare IP via a ConfigMap entry.
I hope I'm reading this discussion correctly, but I did manage to force nginx to close connections using a pretty simple Ingress configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: default-host-sinkhole-ingress
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
return 444;
spec:
ingressClassName: nginx
defaultBackend:
service:
name: dummy-resource
port:
number: 80
Notes: the dummy-resource service is a dummy - it doesn't really exist.
Hope this helps out someone...
@yoshigev Thanks for this. It's definitely better, but this will not pass PCI. If they get a cert handshake that isn't a valid 3rd party trusted signed cert, they will fail you. It doesn't matter if it doesn't complete a connection.
@yoshigev Thanks for this. It's definitely better, but this will not pass PCI. If they get a cert handshake that isn't a valid 3rd party trusted signed cert, they will fail you. It doesn't matter if it doesn't complete a connection.
@JohnGalt1717 Actually, it will if you enable this new flag in addition to the sinkhole ingress: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-reject-handshake
Not an ideal solution, but it works.
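For anyone trying this combination, the flag is a plain entry in the controller's ConfigMap. A minimal sketch (the ConfigMap name and namespace here are assumptions; use whatever your helm release created):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Reject the TLS handshake for requests that don't match any configured
  # server name, instead of serving the default (fake/self-signed) certificate.
  ssl-reject-handshake: "true"
```

With this set alongside the sinkhole Ingress above, clients hitting the bare IP over HTTPS should see the handshake aborted rather than receiving a self-signed certificate.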
We ended up putting haproxy in front of the ingress controller. It verifies the SNI and drops the connection if req.ssl_sni is not right.
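A rough sketch of that haproxy setup. The frontend/backend names, backend address, and allowed SNI suffix are assumptions; this passes TCP straight through, so nginx still terminates TLS for valid hostnames:

```haproxy
frontend fe_https
    bind *:443
    mode tcp
    # wait for the TLS ClientHello so the SNI is available for inspection
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    # drop the connection unless the client sent an expected SNI
    tcp-request content reject unless { req.ssl_sni -m end .example.com }
    default_backend be_ingress_nginx

backend be_ingress_nginx
    mode tcp
    server ingress 10.0.0.10:443 check
```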
We have a difficult PCI compliance issue that they are rejecting the self-signed cert because of the default backend.
We've tried everything we can think of to ensure that a direct IP address won't go to the default backend without fail.
We have tried:
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($host = 'xxx.xxx.xxx.xxx') {
    rewrite ^ https://domain.com$request_uri permanent;
  }
(and '' as well)
We have nginx.ingress.kubernetes.io/default-backend: api-svc to try and force it to use our deployed pod to respond. It ignores this even though it's clearly set in the describe: Default backend: api-svc:80 (10.240.0.18:1562,10.240.0.19:1562)
Nothing seems to intercept this. No matter what we do, this ends up going to the default backend. We need to do a 301 permanent redirect to one of our URLs.
How does one get this working so that the bare IP doesn't result in the default backend kicking in and ignoring all possible configuration? Alternatively, how does one override the default backend settings to just do a redirect?