darold / pgbadger

A fast PostgreSQL Log Analyzer
http://pgbadger.darold.net/
PostgreSQL License

no data in the output file #712

Closed pgfan1024 closed 2 years ago

pgfan1024 commented 2 years ago

Hi,

I am struggling to generate an output HTML file with data. My logging GUCs are as follows (a matching pgbadger invocation is sketched after the settings):

log_min_duration_statement = 0
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
log_autovacuum_min_duration = 0
log_error_verbosity = default
lc_messages = 'en_US.UTF8'
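
(For reference, pgbadger can also be told this prefix explicitly via -p/--prefix in case format autodetection goes wrong; a minimal sketch, with file names as placeholders:

pgbadger -f stderr -p '%t [%p]: user=%u,db=%d,app=%a,client=%h ' -o out.html /tmp/test3.log )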

A few PG log entries are below (I have changed the IP addresses):

2022-02-02 05:45:55 GMT [2561234]: user=postgres,db=postgres,app=[unknown],client=11.22.33.44 LOG: connection authorized: user=postgres database=postgres application_name=pgbench SSL enabled (protocol=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384, bits=256, compression=off)
2022-02-02 05:45:55 GMT [2561234]: user=postgres,db=postgres,app=pgbench,client=11.22.33.44 LOG: duration: 2.529 ms statement: create table pgbench_history(tid int,bid int,aid int,delta int,mtime timestamp,filler char(22))
2022-02-02 05:45:56 GMT [2561234]: user=postgres,db=postgres,app=pgbench,client=11.22.33.44 LOG: duration: 1.376

pgbadger -v /tmp/test3.log
DEBUG: pgBadger version 11.7.
DEBUG: Output 'html' reports will be written to out.html
DEBUG: Starting progressbar writer process
DEBUG: Autodetected log format 'default' from /tmp/test3.log
DEBUG: pgBadger will use log format default to parse /tmp/test3.log.
DEBUG: timezone not specified, using -39600 seconds
DEBUG: Processing log file: /tmp/test3.log
DEBUG: Starting reading file "/tmp/test3.log"...
DEBUG: Start parsing postgresql log at offset 0 of file "/tmp/test3.log" to 67258813
[========================>] Parsed 67258813 bytes of 67258813 (100.00%), queries: 0, events: 0
DEBUG: the log statistics gathering took: 2 wallclock secs ( 0.10 usr 0.01 sys + 0.48 cusr 0.02 csys = 0.61 CPU)
DEBUG: Output 'html' reports will be written to out.html
LOG: Ok, generating html report...
DEBUG: building reports took: 0 wallclock secs ( 0.01 usr + 0.01 sys = 0.02 CPU)
DEBUG: the total execution time took: 2 wallclock secs ( 0.11 usr 0.02 sys + 0.48 cusr 0.02 csys = 0.63 CPU)

darold commented 2 years ago

Well if I take your log sample I have the following:

pgbadger -f stderr test3.log
[========================>] Parsed 604 bytes of 604 (100.00%), queries: 1, events: 0
LOG: Ok, generating html report...

Can you try using `-f stderr` please?

pgfan1024 commented 2 years ago

pgbadger -v /tmp/test3.log -f stderr
DEBUG: pgBadger version 11.7.
DEBUG: Output 'html' reports will be written to out.html
DEBUG: pgBadger will use log format stderr to parse /tmp/test3.log.
DEBUG: timezone not specified, using -39600 seconds
DEBUG: Starting progressbar writer process
DEBUG: Processing log file: /tmp/test3.log
DEBUG: Starting reading file "/tmp/test3.log"...
DEBUG: Start parsing postgresql log at offset 0 of file "/tmp/test3.log" to 67258813
[========================>] Parsed 67258813 bytes of 67258813 (100.00%), queries: 0, events: 0
DEBUG: the log statistics gathering took: 2 wallclock secs ( 0.00 usr 0.01 sys + 0.45 cusr 0.02 csys = 0.48 CPU)
DEBUG: Output 'html' reports will be written to out.html
LOG: Ok, generating html report...
DEBUG: building reports took: 0 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU)
DEBUG: the total execution time took: 2 wallclock secs ( 0.00 usr 0.01 sys + 0.45 cusr 0.02 csys = 0.48 CPU)

I also tried with a small set of log entries, but no luck. Not sure if it matters, but I am running pgbadger on a Mac, installed with brew install pgbadger.

darold commented 2 years ago

Ok, can you try on a Linux machine? Maybe there are portability issues on Mac.

pgfan1024 commented 2 years ago

It does not seem to work on a CentOS 7 VM either (the test.html has no data for queries or events).

pgbadger -v /tmp/test3.log -o test.html -f stderr
DEBUG: pgBadger version 11.7.
DEBUG: Output 'html' reports will be written to test.html
DEBUG: pgBadger will use log format stderr to parse /tmp/test3.log.
DEBUG: timezone not specified, using 0 seconds
DEBUG: Starting progressbar writer process
DEBUG: Processing log file: /tmp/test3.log
DEBUG: Starting reading file "/tmp/test3.log"...
DEBUG: Start parsing postgresql log at offset 0 of file "/tmp/test3.log" to 67258813
DEBUG: the log statistics gathering took: 2 wallclock secs ( 0.00 usr 0.00 sys + 0.39 cusr 0.02 csys = 0.41 CPU)
DEBUG: Output 'html' reports will be written to test.html
LOG: Ok, generating html report...
DEBUG: building reports took: 0 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU)
DEBUG: the total execution time took: 2 wallclock secs ( 0.00 usr 0.00 sys + 0.39 cusr 0.02 csys = 0.41 CPU)

Comparing the DEBUG output with the same command on the Mac, it does not even show the parsed counts for queries and events.

darold commented 2 years ago

If you want, send the bzip2-compressed log file to my private email gilles AT darold DOT net and I will try to find what's going wrong, but usually this is because of a wrong log line prefix or because there is no query in the log file.
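
(For reference, bzip2's -k flag compresses while keeping the original file:

bzip2 -k /tmp/test3.log )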

pgfan1024 commented 2 years ago

Sure, I'll do that. Thanks a lot!

darold commented 2 years ago

How are your logs generated? Are you running a PostgreSQL fork or one available in the cloud? Here you have tons of space characters at the end of each log line and one extra character in front of each line, which is why pgbadger can't parse the log. You should review the way this log is generated: you are losing a lot of disk space and of course pgbadger cannot understand it natively.

You can fix your log using perl -p -i -e 's/^ //; s/ +$//;' test3.log, but it is better to fix the source.
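
(A minimal sketch of a non-destructive variant, writing the cleaned copy to a new file whose name is just an example; the -i flag above edits the file in place instead:

perl -pe 's/^ //; s/ +$//;' test3.log > test3_clean.log
pgbadger -f stderr test3_clean.log )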

pgfan1024 commented 2 years ago

How are your logs generated? Are you running a PostgreSQL fork or one available in the cloud?

It's vanilla Postgres 14 running on a Linux VM.

Here you have tons of space characters at the end of each log line and one extra character in front of each line, which is why pgbadger can't parse the log. You should review the way this log is generated: you are losing a lot of disk space and of course pgbadger cannot understand it natively.

This was done using a Postgres function to read the log file via psql (\o and then select pg_read_file('logfilename')). Could this format be supported with --format logtype, say an sql format?

You can fix your log using perl -p -i -e 's/^ //; s/ +$//;' test3.log, but it is better to fix the source.

Thanks a lot for this! This solved the formatting issue.

pgfan1024 commented 2 years ago

You are right. The formatting has to be fixed at the source, and psql -qAtX fixes it. Thanks a lot!
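
(For anyone else hitting this: the padding comes from psql's default aligned output, and -q/-A/-t/-X turn off messages, alignment, headers and psqlrc. A minimal sketch of the export, with the server-side log path as a placeholder:

psql -qAtX -c "select pg_read_file('log/postgresql.log')" -o /tmp/test3.log )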