kubernetes / ingress-nginx

Ingress NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0

HTTPS for static content: poor performance #5502

Closed vDMG closed 4 years ago

vDMG commented 4 years ago

/triage support

Hello all, I'm facing a performance issue with my NGINX Ingress Controller configuration. I'm trying to serve a 1 MB piece of static content (a JavaScript file) from my web application. I'm benchmarking the endpoint (in the US) from the EU using vegeta (https://github.com/tsenart/vegeta), with the scenarios below.

I've replaced all endpoint and resource names with toto to make things clearer. Bench target file bench-parms:

GET https://toto.example.com/app/dist/vendor.bundle.js?1636e40125c6bad14763
Content-Type: text/javascript
Accept-Encoding: gzip, deflate, br

Scenario A (HTTP):

vegeta attack -targets=bench-parms -format=http -duration=10s -rate=0 -max-workers=20 -output=report-ing.bin
Requests      [total, rate, throughput]         116, 11.47, 9.54
Duration      [total, attack, wait]             12.161s, 10.112s, 2.049s
Latencies     [min, mean, 50, 90, 95, 99, max]  543.598ms, 1.844s, 1.679s, 2.981s, 4.228s, 4.807s, 5.264s
Bytes In      [total, mean]                     436547092, 3763337.00
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:116
Error Set:

Scenario B (HTTPS):

vegeta attack -targets=bench-parms -format=http -duration=10s -rate=0 -max-workers=20 -output=report-ing.bin
Requests      [total, rate, throughput]         30, 2.70, 1.68
Duration      [total, attack, wait]             17.824s, 11.098s, 6.727s
Latencies     [min, mean, 50, 90, 95, 99, max]  1.175s, 10.108s, 10.821s, 17.059s, 17.616s, 17.824s, 17.824s
Bytes In      [total, mean]                     112900110, 3763337.00
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:30
Error Set:

Scenario C (HTTPS, max-workers=1):

vegeta attack -targets=bench-parms -format=http -duration=10s -rate=0 -max-workers=1 -output=report-ing.bin
Requests      [total, rate, throughput]         17, 1.70, 1.61
Duration      [total, attack, wait]             10.552s, 10.014s, 538.299ms
Latencies     [min, mean, 50, 90, 95, 99, max]  485.656ms, 620.709ms, 544.407ms, 739.821ms, 1.417s, 1.771s, 1.771s
Bytes In      [total, mean]                     63976729, 3763337.00
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:17
Error Set:
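For anyone reproducing these numbers: the text reports above can be regenerated from the .bin files with vegeta's report subcommand (the file name matches the -output flag used in the attacks above):

    vegeta report -type=text report-ing.bin
    # or a latency histogram, bucketed for the ~0.5s-18s range seen above
    vegeta report -type='hist[0,1s,2s,4s,8s,16s]' report-ing.bin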

As you can see, over HTTP I can handle ~11 requests per second, but with HTTPS enabled only ~2. What could explain such a difference? I only see this performance issue with concurrency > 2. There is almost no load on CPU or RAM. I've tried enabling caching on the NGINX side, with zero improvement. On the same endpoint, other requests with smaller payloads get better request rates. I can give you any configuration you need if it can help explain this situation :slightly_smiling_face: Thank you :smile: Here is all my configuration; I can give you more details if needed.
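One way to split TLS handshake cost from body transfer time is curl's timing variables; this is just a diagnostic sketch against the anonymized URL from above:

    curl -o /dev/null -sS \
      -w 'dns=%{time_namelookup} tcp=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
      'https://toto.example.com/app/dist/vendor.bundle.js?1636e40125c6bad14763'

If tls dominates, handshakes are the bottleneck; if total minus ttfb dominates, it is the transfer of the 1 MB body.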

Installation: official Helm chart, deployment mode (https://github.com/helm/charts/tree/b8d8c8d3cd7c6aab840d36d62eb9b9bee24e3473)
HTTPS: cert-manager with Let's Encrypt
NGINX image: nginx-ingress-controller:0.27.1 (or 0.32.0)
OS: Container-Optimized OS with a standard GCE disk
Kubernetes: GKE
vCPU: 8
RAM: 4 GB
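The nginx.conf below was captured with head -500 on the rendered config inside the controller pod. Assuming a standard chart install, it can be pulled like this (deployment name and namespace are placeholders):

    kubectl -n ingress-nginx exec deploy/nginx-ingress-controller -- head -500 /etc/nginx/nginx.conf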

nginx.conf:

head -500 nginx.conf

# Configuration checksum: 5482638367820671880

# setup custom paths that do not require root access
pid /tmp/nginx.pid;

daemon off;

worker_processes 10;

worker_rlimit_nofile 103833;

worker_shutdown_timeout 240s ;

events {
    multi_accept        on;
    worker_connections  16384;
    use                 epoll;
}

http {
    lua_package_path "/etc/nginx/lua/?.lua;;";

    lua_shared_dict balancer_ewma 10M;
    lua_shared_dict balancer_ewma_last_touched_at 10M;
    lua_shared_dict balancer_ewma_locks 1M;
    lua_shared_dict certificate_data 20M;
    lua_shared_dict certificate_servers 5M;
    lua_shared_dict configuration_data 20M;
    lua_shared_dict ocsp_response_cache 5M;

    init_by_lua_block {
        collectgarbage("collect")

        -- init modules
        local ok, res

        ok, res = pcall(require, "lua_ingress")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            lua_ingress = res
            lua_ingress.set_config({
                use_forwarded_headers = false,
                use_proxy_protocol = false,
                is_ssl_passthrough_enabled = false,
                http_redirect_code = 308,
                listen_ports = { ssl_proxy = "442", https = "443" },

                hsts = true,
                hsts_max_age = 15724800,
                hsts_include_subdomains = true,
                hsts_preload = false,
            })
        end

        ok, res = pcall(require, "configuration")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            configuration = res
        end

        ok, res = pcall(require, "balancer")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            balancer = res
        end

        ok, res = pcall(require, "monitor")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            monitor = res
        end

        ok, res = pcall(require, "certificate")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            certificate = res
            certificate.is_ocsp_stapling_enabled = false
        end

        ok, res = pcall(require, "plugins")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            plugins = res
        end

        -- load all plugins that'll be used here
        plugins.init({ })
    }

    init_worker_by_lua_block {
        lua_ingress.init_worker()
        balancer.init_worker()

        monitor.init_worker()

        plugins.run()
    }

    geoip_country       /etc/nginx/geoip/GeoIP.dat;
    geoip_city          /etc/nginx/geoip/GeoLiteCity.dat;
    geoip_org           /etc/nginx/geoip/GeoIPASNum.dat;
    geoip_proxy_recursive on;

    aio                 threads;
    aio_write           on;

    tcp_nopush          on;
    tcp_nodelay         on;

    log_subrequest      on;

    reset_timedout_connection on;

    keepalive_timeout  620s;
    keepalive_requests 100;

    client_body_temp_path           /tmp/client-body;
    fastcgi_temp_path               /tmp/fastcgi-temp;
    proxy_temp_path                 /tmp/proxy-temp;
    ajp_temp_path                   /tmp/ajp-temp;

    client_header_buffer_size       1k;
    client_header_timeout           60s;
    large_client_header_buffers     4 8k;
    client_body_buffer_size         8k;
    client_body_timeout             60s;

    http2_max_field_size            4k;
    http2_max_header_size           16k;
    http2_max_requests              1000;
    http2_max_concurrent_streams    128;

    types_hash_max_size             2048;
    server_names_hash_max_size      1024;
    server_names_hash_bucket_size   128;
    map_hash_bucket_size            64;

    proxy_headers_hash_max_size     512;
    proxy_headers_hash_bucket_size  64;

    variables_hash_bucket_size      256;
    variables_hash_max_size         2048;

    underscores_in_headers          off;
    ignore_invalid_headers          on;

    limit_req_status                503;
    limit_conn_status               503;

    include /etc/nginx/mime.types;
    default_type text/html;

    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component;
    gzip_proxied any;
    gzip_vary on;

    # Custom headers for response

    server_tokens on;

    # disable warnings
    uninitialized_variable_warn off;

    # Additional available variables:
    # $namespace
    # $ingress_name
    # $service_name
    # $service_port
    log_format upstreaminfo escape=json '{"ingress.time": "$time_iso8601", "ingress.namespace": "$namespace", "ingress.ingress_name": "$ingress_name", "ingress.remote_addr": "$proxy_protocol_addr", "ingress.x-forward-for": "$proxy_add_x_forwarded_for", "ingress.request_id": "$req_id", "ingress.remote_user": "$remote_user", "ingress.bytes_sent": "$bytes_sent", "ingress.request_time": $request_time, "ingress.status": "$status", "ingress.vhost": "$host", "ingress.request_proto": "$server_protocol", "ingress.path": "$uri", "ingress.request_query": "$args", "ingress.request_length": "$request_length", "ingress.method": "$request_method", "ingress.http_referrer": "$http_referer", "ingress.http_user_agent": "$http_user_agent", "ingress.upstream_addr": "$upstream_addr", "ingress.upstream_status": "$upstream_status", "ingress.upstream_request_time": "$upstream_response_time"}';

    map $request_uri $loggable {

        default 1;
    }

    access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;

    error_log  /var/log/nginx/error.log notice;

    resolver 10.2.16.10 valid=30s ipv6=off;

    # See https://www.nginx.com/blog/websocket-nginx
    map $http_upgrade $connection_upgrade {
        default          upgrade;

        # See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
        ''               '';

    }

    # Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
    # If no such header is provided, it can provide a random value.
    map $http_x_request_id $req_id {
        default   $http_x_request_id;

        ""        $request_id;

    }

    # Create a variable that contains the literal $ character.
    # This works because the geo module will not resolve variables.
    geo $literal_dollar {
        default "$";
    }

    server_name_in_redirect off;
    port_in_redirect        off;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;

    ssl_early_data off;

    # turn on session caching to drastically improve performance

    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 10m;

    # allow configuring ssl session tickets
    ssl_session_tickets on;

    # slightly reduce the time-to-first-byte
    ssl_buffer_size 4k;

    # allow configuring custom ssl ciphers
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
    ssl_prefer_server_ciphers on;

    ssl_ecdh_curve auto;

    # PEM sha: 9c6e3728edcd2e18c5f935d2146548338193f013
    ssl_certificate     /etc/ingress-controller/ssl/default-fake-certificate.pem;
    ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;

    proxy_ssl_session_reuse on;

    # Custom code snippet configured in the configuration configmap
    proxy_cache_path /tmp/nginx-cache levels=1:2 keys_zone=static-cache:2m max_size=100m inactive=7d use_temp_path=off;
    proxy_cache_key $scheme$proxy_host$request_uri;
    proxy_cache_lock on;
    proxy_cache_use_stale updating;

    map $request_uri $request_uri_path {
        "~^(?P<path>[^?]*)(\?.*)?$"  $path;
    }

    geo $toto_ip {
        default          0;
        X.X.X.X/22  1;
    }

    upstream upstream_balancer {
        server 0.0.0.1; # placeholder

        balancer_by_lua_block {
            balancer.balance()
        }

        keepalive 32;

        keepalive_timeout  60s;
        keepalive_requests 100;

    }

    # Cache for internal auth checks
    proxy_cache_path /tmp/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off;

    # Global filters

    ## start server _
    server {
        server_name _ ;

        listen 80 default_server reuseport backlog=32768 ;
        listen 443 default_server reuseport backlog=32768 ssl http2 ;

        set $proxy_upstream_name "-";

        ssl_certificate_by_lua_block {
            certificate.call()
        }

        location / {

            set $namespace      "";
            set $ingress_name   "";
            set $service_name   "";
            set $service_port   "";
            set $location_path  "/";

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    ssl_redirect = false,
                    force_no_ssl_redirect = false,
                    use_port_in_redirects = false,
                })
                balancer.rewrite()
                plugins.run()
            }

            # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
            # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
            # other authentication method such as basic auth or external auth useless - all requests will be allowed.
            #access_by_lua_block {
            #}

            header_filter_by_lua_block {
                lua_ingress.header()
                plugins.run()
            }

            body_filter_by_lua_block {
            }

            log_by_lua_block {
                balancer.log()

                monitor.call()

                plugins.run()
            }

            access_log off;

            port_in_redirect off;

            set $balancer_ewma_score -1;
            set $proxy_upstream_name "upstream-default-backend";
            set $proxy_host          $proxy_upstream_name;
            set $pass_access_scheme  $scheme;

            set $pass_server_port    $server_port;

            set $best_http_host      $http_host;
            set $pass_port           $pass_server_port;

            set $proxy_alternative_upstream_name "";

            client_max_body_size                    1m;

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;

            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $remote_addr;

            proxy_set_header X-Forwarded-For        $remote_addr;

            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;

            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         off;
            proxy_buffer_size                       4k;
            proxy_buffers                           4 4k;

            proxy_max_temp_file_size                1024m;

            proxy_request_buffering                 on;
            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_timeout             0;
            proxy_next_upstream_tries               3;

            proxy_pass http://upstream_balancer;

            proxy_redirect                          off;

        }

        # health checks in cloud providers require the use of port 80
        location /healthz {

            access_log off;
            return 200;
        }

        # this is required to avoid error if nginx is being monitored
        # with an external software (like sysdig)
        location /nginx_status {

            allow 127.0.0.1;

            deny all;

            access_log off;
            stub_status on;
        }

    }
    ## end server _
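One thing that stands out in the dump above for a 1 MB response: ssl_buffer_size is 4k, which trades per-record overhead on large bodies for a better time-to-first-byte. If that mattered here, it is tunable via the controller's ConfigMap key ssl-buffer-size; a sketch, with the ConfigMap name and namespace as placeholders:

    kubectl -n ingress-nginx patch configmap nginx-ingress-controller \
      --type merge -p '{"data":{"ssl-buffer-size":"16k"}}'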

Server block for toto.example.com:

server {
        server_name toto.example.com ;

        listen 80  ;
        listen 443  ssl http2 ;

        set $proxy_upstream_name "-";

        ssl_certificate_by_lua_block {
            certificate.call()
        }

        ### file caching attempt, but same poor performance
        location /app/dist/ {

            set $namespace      "toto";
            set $ingress_name   "toto";
            set $service_name   "toto";
            set $service_port   "80";
            set $location_path  "/app/dist/";

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    ssl_redirect = true,
                    force_no_ssl_redirect = false,
                    use_port_in_redirects = false,
                })
                balancer.rewrite()
                plugins.run()
            }

            # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
            # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
            # other authentication method such as basic auth or external auth useless - all requests will be allowed.
            #access_by_lua_block {
            #}

            header_filter_by_lua_block {
                lua_ingress.header()
                plugins.run()
            }

            body_filter_by_lua_block {
            }

            log_by_lua_block {
                balancer.log()

                monitor.call()

                plugins.run()
            }

            port_in_redirect off;

            set $balancer_ewma_score -1;
            set $proxy_upstream_name "toto-80";
            set $proxy_host          $proxy_upstream_name;
            set $pass_access_scheme  $scheme;

            set $pass_server_port    $server_port;

            set $best_http_host      $http_host;
            set $pass_port           $pass_server_port;

            set $proxy_alternative_upstream_name "";

            allow 10.2.64.0/18;
            allow X.X.X.X/22;
            deny all;

            client_max_body_size                    1m;

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;

            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $remote_addr;

            proxy_set_header X-Forwarded-For        $remote_addr;

            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;

            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         on;
            proxy_buffer_size                       4k;
            proxy_buffers                           4 4k;

            proxy_max_temp_file_size                1024m;

            proxy_request_buffering                 on;
            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_timeout             0;
            proxy_next_upstream_tries               3;

            add_header X-Frame-Options "";
            proxy_cache static-cache;
            proxy_cache_valid 404 10m;
            proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
            proxy_cache_bypass $http_x_purge;
            add_header X-Cache-Status $upstream_cache_status;

            proxy_pass http://upstream_balancer;

            proxy_redirect                          off;

        }

        location / {

            set $namespace      "toto";
            set $ingress_name   "toto";
            set $service_name   "toto";
            set $service_port   "80";
            set $location_path  "/";

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    ssl_redirect = true,
                    force_no_ssl_redirect = false,
                    use_port_in_redirects = false,
                })
                balancer.rewrite()
                plugins.run()
            }

            # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
            # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
            # other authentication method such as basic auth or external auth useless - all requests will be allowed.
            #access_by_lua_block {
            #}

            header_filter_by_lua_block {
                lua_ingress.header()
                plugins.run()
            }

            body_filter_by_lua_block {
            }

            log_by_lua_block {
                balancer.log()

                monitor.call()

                plugins.run()
            }

            port_in_redirect off;

            set $balancer_ewma_score -1;
            set $proxy_upstream_name "toto-80";
            set $proxy_host          $proxy_upstream_name;
            set $pass_access_scheme  $scheme;

            set $pass_server_port    $server_port;

            set $best_http_host      $http_host;
            set $pass_port           $pass_server_port;

            set $proxy_alternative_upstream_name "";
            allow 10.2.64.0/18;
            allow X.X.X.X/22;
            deny all;

            client_max_body_size                    1m;

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;

            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $remote_addr;

            proxy_set_header X-Forwarded-For        $remote_addr;

            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;

            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         off;
            proxy_buffer_size                       4k;
            proxy_buffers                           4 4k;

            proxy_max_temp_file_size                1024m;

            proxy_request_buffering                 on;
            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_timeout             0;
            proxy_next_upstream_tries               3;

            proxy_pass http://upstream_balancer;

            proxy_redirect                          off;

        }

    }

ConfigMap:

  http-snippet: |
    map $request_uri $request_uri_path {
      "~^(?P<path>[^?]*)(\?.*)?$"  $path;
    }

    geo $whitelist_ip {
      default          0;
      X.X.X.X/22       1;
    }
  keep-alive: "620"
  keep-alive-requests: "10000"
  ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
  log-format-escape-json: "true"
  log-format-upstream: '{"ingress.time": "$time_iso8601", "ingress.namespace": "$namespace",
    "ingress.ingress_name": "$ingress_name", "ingress.remote_addr": "$proxy_protocol_addr",
    "ingress.x-forward-for": "$proxy_add_x_forwarded_for", "ingress.request_id": "$req_id",
    "ingress.remote_user": "$remote_user", "ingress.bytes_sent": "$bytes_sent", "ingress.request_time":
    $request_time, "ingress.status": "$status", "ingress.vhost": "$host", "ingress.request_proto":
    "$server_protocol", "ingress.path": "$uri", "ingress.request_query": "$args",
    "ingress.request_length": "$request_length", "ingress.method": "$request_method",
    "ingress.http_referrer": "$http_referer", "ingress.http_user_agent": "$http_user_agent",
    "ingress.upstream_addr": "$upstream_addr", "ingress.upstream_status": "$upstream_status",
    "ingress.upstream_request_time": "$upstream_response_time"}'
  ssl-redirect: "true"
  whitelist-source-range: X.X.X.X/32
  worker-processes: "10"
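For completeness: with the Helm chart linked above, these ConfigMap entries are normally managed through the chart's controller.config values rather than edited in place. A sketch, with release and repo names as placeholders:

    helm upgrade my-release stable/nginx-ingress \
      --set-string controller.config.keep-alive=620 \
      --set-string controller.config.keep-alive-requests=10000 \
      --set-string controller.config.worker-processes=10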
vDMG commented 4 years ago

This turned out to be a misconfiguration in my bench.
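(For context on that closing note: with -rate=0, vegeta attacks as fast as its max-workers concurrent workers allow, so observed throughput is capped at roughly workers / mean latency rather than by server capacity. The reports above are consistent with a latency-bound client rather than a saturated server:

    Scenario A: 20 workers / 1.844s mean   ≈ 10.8 req/s  (reported 9.54)
    Scenario B: 20 workers / 10.108s mean  ≈  2.0 req/s  (reported 1.68)
    Scenario C:  1 worker  / 0.621s mean   ≈  1.6 req/s  (reported 1.61)

This is one plausible reading of the closing comment; the poster did not spell out the root cause.)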