unbit / uwsgi

uWSGI application server container
http://projects.unbit.it/uwsgi

uwsgi socket/UDP JSON logger is broken if no other logger uses the same format #2456

Open klausman opened 2 years ago

klausman commented 2 years ago

Our own bug for tracking this is here: https://phabricator.wikimedia.org/T312550

$ uwsgi --version
2.0.18-debian

(aka 2.0.18-1 in Debian)

We have the following logging config for a uwsgi application:

log-encoder=format:local [${strftime:%%Y-%%m-%%dT%%H:%%M:%%S}] ${msgnl}
log-encoder=json:logstash {"@timestamp":"${strftime:%%Y-%%m-%%dT%%H:%%M:%%S}","type":"ores","logger_name":"uwsgi","host":"%h","level":"INFO","message":"${msg}"}
log-route=local .*
log-route=logstash .*
log-x-forwarded-for=true
logformat=[pid: %(pid)] %(addr) (%(user)) {%(vars) vars in %(pktsize) bytes} [%(ctime)] %(method) %(uri) => generated %(rsize) bytes in %(msecs) msecs (%(proto) %(status)) %(headers) headers in %(hsize) bytes (%(switches) switches on core %(core)) user agent "%(uagent)"
logger=local file:/srv/log/ores/main.log
logger=logstash socket:127.0.0.1:11514

To summarize, we have two loggers:

- "local", which writes plain-formatted log lines to /srv/log/ores/main.log
- "logstash", which sends JSON-encoded log lines as UDP datagrams to 127.0.0.1:11514

The second one goes to rsyslogd listening on port 11514, which does some transformations and then sends the data on to Kafka. This setup had worked for a long time, until we fairly recently updated to Debian Buster.

When we updated to Buster, the second logger broke quietly: the Kafka topics are empty. Local debugging shows that uwsgi is still sending datagrams to the relevant port, but they are all empty (Len=0):

ores1001 ~ $ sudo tshark -npi any udp and port 11514 or icmp 
Running as user "root" and group "root". This could be dangerous.
Capturing on 'any'
    1 0.000000000    127.0.0.1 → 127.0.0.1    UDP 44 43004 → 11514 Len=0
    2 0.127078761    127.0.0.1 → 127.0.0.1    UDP 44 43004 → 11514 Len=0
    3 1.195554323    127.0.0.1 → 127.0.0.1    UDP 44 43004 → 11514 Len=0
    4 1.568624915    127.0.0.1 → 127.0.0.1    UDP 44 43004 → 11514 Len=0
    5 1.864752749    127.0.0.1 → 127.0.0.1    UDP 44 43004 → 11514 Len=0
    6 2.060869629    127.0.0.1 → 127.0.0.1    UDP 44 43004 → 11514 Len=0
    7 2.123019457    127.0.0.1 → 127.0.0.1    UDP 44 43004 → 11514 Len=0
    8 2.475948194    127.0.0.1 → 127.0.0.1    UDP 44 43004 → 11514 Len=0
    9 2.680618117    127.0.0.1 → 127.0.0.1    UDP 44 43004 → 11514 Len=0
   10 2.753684924    127.0.0.1 → 127.0.0.1    UDP 44 43004 → 11514 Len=0
   11 2.840620163    127.0.0.1 → 127.0.0.1    UDP 44 43004 → 11514 Len=0
   12 3.224103044    127.0.0.1 → 127.0.0.1    UDP 44 43004 → 11514 Len=0
[...]

The way rsyslog is set up, it does not act on these empty datagrams, so no messages are sent to Kafka.
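For anyone who wants to inspect the datagram contents without tshark, here is a minimal Python sketch (not part of our setup; it assumes rsyslog is stopped so that port 11514 is free to bind, and the port comes from the config above):

import socket

# Bind the UDP port the "logstash" logger targets and dump each datagram.
# Assumes rsyslog is stopped so 127.0.0.1:11514 is free; adjust the port
# if your logger points somewhere else.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 11514))
while True:
    data, addr = sock.recvfrom(65535)
    print(f"{addr[0]}:{addr[1]} {len(data)} bytes: {data!r}")

With the broken configuration described above, we would expect every printed line to show 0 bytes, matching the tshark capture.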

While debugging this, I added an additional logger, using the same basic directives as the JSON/Logstash one, but logging to a local file. These are the added directives:

log-encoder=json:local2 {"@timestamp":"${strftime:%%Y-%%m-%%dT%%H:%%M:%%S}","type":"ores","logger_name":"uwsgi","host":"%h","level":"INFO","message":"${msg}"}
log-route=local2 .*
logger=local2 file:/srv/log/ores/json-debug.log

This results in the correct JSON data being logged to the configured file.

The weird part: as soon as the third logger is added, the original JSON logger (to UDP) starts working. The datagrams seen in Wireshark are no longer size 0, and the Logstash UI shows the topic receiving messages. Remove the third logger, and (after the remaining uwsgi processes terminate) the datagrams are Len=0 again and nothing gets sent to Kafka.

I think this is a bug in uwsgi where the JSON log line is not generated correctly unless another logger writes the same format locally.

elukey commented 2 years ago

Gentle ping :) Any help?

klausman commented 2 years ago

After some experimentation, we have found a workaround:

buffer-size=8192

The specific size likely depends on the environment, and possibly on the size of the generated JSON.
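If it helps with picking a value, here is a rough sketch (not something we run in production) that builds one line shaped like the json:logstash encoder output and prints its length; the sample message is made up, so substitute a worst-case line from your own log:

import json
from datetime import datetime

# Hypothetical sample message; replace with a worst-case line from main.log.
sample_msg = '[pid: 12345] 10.0.0.1 () {40 vars in 900 bytes} [...] user agent "Mozilla/5.0"'

# Build a line shaped like our json:logstash encoder output and measure it.
line = json.dumps({
    "@timestamp": datetime.now().strftime("%Y-%m-%dT%H:%M:%S"),
    "type": "ores",
    "logger_name": "uwsgi",
    "host": "ores1001",
    "level": "INFO",
    "message": sample_msg,
})
print(len(line))

A buffer-size comfortably above that number (we settled on 8192) avoided the empty datagrams for us.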

It's of course still a bug, but at least we now know of a mitigation.