fooinha / nginx-json-log

Highly configurable JSON format logging per Location - nginx logging module - aka. kasha 🍲

http_json_log_req_body_limit issue #43

Closed · haim-ari closed this 3 years ago

haim-ari commented 3 years ago

@fooinha There is still an issue I'm having as a result of a large payload size.

While increasing http_json_log_req_body_limit to 512k (and then 1M) solved the issue for payloads under 512k characters, I'm now trying to send a payload of ~700k characters (690814).
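
For reference, the body-capture settings all live in the proxied location; this is a minimal excerpt of the relevant lines from the full nginx.conf pasted further down:

server {
    listen 8080;

    location / {
        client_max_body_size 100M;
        http_json_log_req_body_limit 1M;

        json_log file:/tmp/full.log full_log;
        json_log kafka:EdgeTest full_log;
    }
}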

What happens here is different:

In Postman tests I get an HTTP response code of 204 (expected) or 502.
But in both cases the event is not sent to Kafka at all (instead of just being trimmed), and I see this in the logs:

Ingress log

nginx-ingress-controller-zcvzm nginx-ingress-controller 46.11.31.2 - - [21/Jul/2021:13:43:57 +0000] "POST / HTTP/1.1" 204 0 "-" "PostmanRuntime/7.28.2" 690814 0.627 [prd-nj-geoedge-8080] [] 10.42.2.31:8080 0 0.072 204 aacf863741668e34bc617fa4da21e57f

Nginx log

2021/07/21 13:59:38 [warn] 17#0: *410 a client request body is buffered to a temporary file /usr/local/nginx/client_body_temp/0000000004, client: 10.42.9.0, server: localhost, request: "POST / HTTP/1.1", host: "geoedge.mydomain.com"

nginx debug log attached: geo.log
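
The warn line itself is standard nginx behaviour: a request body larger than client_body_buffer_size (default 8k/16k) is spilled to a temporary file under client_body_temp. If the module only reads the part of the body that stays in memory, that could explain the truncation; as a hedged workaround sketch (not confirmed as the actual cause), one could raise the buffer so a ~700k body never leaves RAM. The 2m figure here is an assumption sized for this payload, not something from the module documentation:

location / {
    # assumption: keep the whole ~700k body in a single in-memory buffer,
    # so it is never written out to client_body_temp
    client_body_buffer_size 2m;

    http_json_log_req_body_limit 1M;
    json_log kafka:EdgeTest full_log;
}

client_body_in_single_buffer is already on in the http block of the config below, but it only takes effect when the body actually fits into client_body_buffer_size.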

This is the nginx.conf I use:

user  nobody;
worker_processes  auto;
daemon off;
worker_rlimit_nofile 100000;

events {
    worker_connections  100000;
    use epoll;
    multi_accept on;
}

http {
    aio_write on;
    client_body_in_single_buffer on;
    include       mime.types;
    default_type  application/octet-stream;
    # default_type application/json;

    #_id                    $request_id;
    json_log_format full_log '
       _date                  $time_iso8601;
       conn.serial            $connection;
       conn.requests          $connection_requests;
       conn.pipelined         $pipe;
       conn.info.rtt          $tcpinfo_rtt;
       conn.src.ip            $remote_addr;
       conn.src.port          $remote_port;
       conn.dst.ip            $server_addr;
       conn.dst.port          $server_port;
       http.req.protocol      $server_protocol;
       http.req.method        $request_method;
       http.req.path          $uri;
       http.req.args          $args;
       http.req.host          $host;
       http.req.content.type  $content_type;
       http.req.content.length $content_length;
       http.req.headers       $http_json_log_req_headers;
       http.req.body          $http_json_log_req_body;
       http.req.time.taken.ms $request_time;
       http.req.completion    $request_completion;
       http.resp.headers      $http_json_log_resp_headers;
       http.resp.status       $status;
    ';

    gzip  on;
    proxy_ignore_client_abort on;
    proxy_headers_hash_max_size 4096;
    reset_timedout_connection off;
    client_body_timeout 10;
    send_timeout 60;
    keepalive_timeout 10;
    keepalive_requests 100000;

    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    client_max_body_size 100M;
    ignore_invalid_headers on;
    client_header_timeout  3m;
    connection_pool_size  256;
    client_header_buffer_size 4k;
    large_client_header_buffers 4 32k;
    request_pool_size  4k;
    output_buffers   4 32k;
    postpone_output  1460;

    sendfile on;
    tcp_nopush on;
    gzip_min_length 10240;
    gzip_comp_level 9;
    gzip_vary on;
    gzip_disable msie6;
    gzip_proxied expired no-cache no-store private auth;
    gzip_buffers 4 32k;
    gzip_types
        text/css
        text/javascript
        text/xml
        text/plain
        text/x-component
        application/javascript
        application/x-javascript
        application/json
        application/xml
        application/rss+xml
        application/atom+xml
        font/truetype
        font/opentype
        application/vnd.ms-fontobject
        image/svg+xml;

    server {
        aio_write on;
        proxy_headers_hash_max_size 4096;
        client_max_body_size 100M;
        listen 9876;
        access_log /dev/stdout;
        error_log /dev/stdout info;
        location / {
            return 204;
        }
    }

    json_log_kafka_brokers broker1.mydomain.com:9092;
    json_log_kafka_compression none;

    server {
        aio_write on;
        proxy_headers_hash_max_size 4096;
        client_max_body_size 100M;
        listen 8080;
        access_log /dev/null;
        error_log /dev/null;
        server_name localhost;

        # location = /basic_status {
        #     stub_status;
        # }

        location = /keepalive {
            access_log off;
            return 200;
        }

        location / {
            aio_write on;
            http_json_log_req_body_limit 1M;
            proxy_headers_hash_max_size 4096;
            client_max_body_size 100M;
            proxy_read_timeout 300s;
            proxy_max_temp_file_size 4096m;
            proxy_connect_timeout 75s;
            proxy_pass http://localhost:9876/;
            error_log  /dev/stdout  debug;

            json_log file:/tmp/full.log full_log;
            json_log kafka:EdgeTest full_log;
        }
    }
}

/usr/local/nginx/client_body_temp # ls -la

total 12
drwx------    2 nobody   root          4096 Jul 21 14:25 .
drwxr-xr-x    1 root     root          4096 Jul 21 14:21 ..
/usr/local/nginx/client_body_temp #

I cleared the file /tmp/full.log, then fired one request, and I can see the file size is 10K.

I took the logged payload, base64-decoded it, and I can see that it is trimmed...

/tmp # ls -ltrh
total 12K
-rw-r--r--    1 root     root       10.0K Jul 21 14:30 full.log
/tmp # wc -m full.log
10209 full.log

Can you advise on this, please?

Originally posted by @haim-ari in https://github.com/fooinha/nginx-json-log/issues/42#issuecomment-884219507

fooinha commented 3 years ago

I know the reason for this.

haim-ari commented 3 years ago

> I know the reason for this.

@fooinha OK, waiting for your update on this. Thanks

fooinha commented 3 years ago

@haim-ari, this may be fixed with the new release #44.

This release has some considerable changes; check the release notes: releases/tag/v0.0.8

haim-ari commented 3 years ago

@fooinha I've switched to using another project for now... https://github.com/segmentio/kafka-go

Thanks for the update