Closed · pgfan1024 closed this issue 2 years ago
Well if I take your log sample I have the following:
pgbadger -f stderr test3.log
[========================>] Parsed 604 bytes of 604 (100.00%), queries: 1, events: 0
LOG: Ok, generating html report...
Can you try using `-f stderr` please?
pgbadger -v /tmp/test3.log -f stderr
DEBUG: pgBadger version 11.7.
DEBUG: Output 'html' reports will be written to out.html
DEBUG: pgBadger will use log format stderr to parse /tmp/test3.log.
DEBUG: timezone not specified, using -39600 seconds
DEBUG: Starting progressbar writer process
DEBUG: Processing log file: /tmp/test3.log
DEBUG: Starting reading file "/tmp/test3.log"...
DEBUG: Start parsing postgresql log at offset 0 of file "/tmp/test3.log" to 67258813
[========================>] Parsed 67258813 bytes of 67258813 (100.00%), queries: 0, events: 0
DEBUG: the log statistics gathering took: 2 wallclock secs ( 0.00 usr 0.01 sys + 0.45 cusr 0.02 csys = 0.48 CPU)
DEBUG: Output 'html' reports will be written to out.html
LOG: Ok, generating html report...
DEBUG: building reports took: 0 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU)
DEBUG: the total execution time took: 2 wallclock secs ( 0.00 usr 0.01 sys + 0.45 cusr 0.02 csys = 0.48 CPU)
Also tried with a small set of log entries, but no luck. Not sure if it matters: I am using pgbadger on a Mac, installed with `brew install pgbadger`.
Ok, if you can, try on a Linux machine; maybe there are portability issues on Mac.
On a CentOS 7 VM it does not seem to work either (the test.html has no data for queries or events).
pgbadger -v /tmp/test3.log -o test.html -f stderr
DEBUG: pgBadger version 11.7.
DEBUG: Output 'html' reports will be written to test.html
DEBUG: pgBadger will use log format stderr to parse /tmp/test3.log.
DEBUG: timezone not specified, using 0 seconds
DEBUG: Starting progressbar writer process
DEBUG: Processing log file: /tmp/test3.log
DEBUG: Starting reading file "/tmp/test3.log"...
DEBUG: Start parsing postgresql log at offset 0 of file "/tmp/test3.log" to 67258813
DEBUG: the log statistics gathering took: 2 wallclock secs ( 0.00 usr 0.00 sys + 0.39 cusr 0.02 csys = 0.41 CPU)
DEBUG: Output 'html' reports will be written to test.html
LOG: Ok, generating html report...
DEBUG: building reports took: 0 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU)
DEBUG: the total execution time took: 2 wallclock secs ( 0.00 usr 0.00 sys + 0.39 cusr 0.02 csys = 0.41 CPU)
Comparing the DEBUG output with the similar command on the Mac, this run does not show the processed counts for queries and events.
If you want, send the bzip2-compressed log file to my private email gilles AT darold DOT net and I will try to find what's going wrong. Usually this is caused by a wrong log line prefix or by a log file that contains no queries.
Sure, I'll do that. Thanks a lot!
How are your logs generated? Are you running a PostgreSQL fork or one available in the cloud? Here you have tons of space characters at the end of each log line and one extra character at the front of each line, which is why pgbadger can't parse the log. You should review the way this log is generated: you are losing a lot of disk space, and of course pgbadger cannot understand it natively.
You can fix your log using `perl -p -i -e 's/^ //; s/ +$//;' test3.log`, but it is better to fix the source.
> How are your logs generated? Are you running a PostgreSQL fork or one available in the cloud?
It's vanilla Postgres 14 running on a linux VM.
> Here you have tons of space characters at the end of each log line and one extra character at the front of each line, which is why pgbadger can't parse the log. You should review the way this log is generated: you are losing a lot of disk space, and of course pgbadger cannot understand it natively.
This was using a Postgres function to read a log file through psql (`\o` and then `select pg_read_file('logfilename')`). Can this format be supported using `--format logtype`, say an `sql` format?
> You can fix your log using `perl -p -i -e 's/^ //; s/ +$//;' test3.log`, but it is better to fix the source.
Thanks a lot for this! This solved the formatting issue.
You are right. The formatting has to be fixed at the source, and `psql -qAtX` fixes it. Thanks a lot!
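For anyone hitting the same issue, a sketch of the corrected export (it needs a live server and superuser rights for `pg_read_file`, so the filename is a placeholder):

```shell
# -q  quiet: suppress informational messages
# -A  unaligned output: no column padding at line start/end
# -t  tuples only: no header row or row-count footer
# -X  do not read ~/.psqlrc
psql -qAtX -c "select pg_read_file('logfilename')" -o /tmp/test3.log
```

The `-A` flag is what matters here: psql's default aligned mode pads every row to the widest line, which produced the leading and trailing spaces that broke pgbadger's parser.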
Hi,
I am struggling to generate an output html file with data. My logging GUCs are as below:
A few PG log entries are below (I have changed the IP address):
2022-02-02 05:45:55 GMT [2561234]: user=postgres,db=postgres,app=[unknown],client=11.22.33.44 LOG: connection authorized: user=postgres database=postgres application_name=pgbench SSL enabled (protocol=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384, bits=256, compression=off)
2022-02-02 05:45:55 GMT [2561234]: user=postgres,db=postgres,app=pgbench,client=11.22.33.44 LOG: duration: 2.529 ms statement: create table pgbench_history(tid int,bid int,aid int,delta int,mtime timestamp,filler char(22))
2022-02-02 05:45:56 GMT [2561234]: user=postgres,db=postgres,app=pgbench,client=11.22.33.44 LOG: duration: 1.376
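The GUC values themselves did not survive in this thread, but a `log_line_prefix` consistent with the sample entries above would look like the following (inferred from the lines shown, not confirmed by the poster):

```
# postgresql.conf — settings inferred from the sample log lines
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
log_min_duration_statement = 0   # explains the "duration: ... statement:" lines
log_connections = on             # explains the "connection authorized" line
```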
pgbadger -v /tmp/test3.log
DEBUG: pgBadger version 11.7.
DEBUG: Output 'html' reports will be written to out.html
DEBUG: Starting progressbar writer process
DEBUG: Autodetected log format 'default' from /tmp/test3.log
DEBUG: pgBadger will use log format default to parse /tmp/test3.log.
DEBUG: timezone not specified, using -39600 seconds
DEBUG: Processing log file: /tmp/test3.log
DEBUG: Starting reading file "/tmp/test3.log"...
DEBUG: Start parsing postgresql log at offset 0 of file "/tmp/test3.log" to 67258813
[========================>] Parsed 67258813 bytes of 67258813 (100.00%), queries: 0, events: 0
DEBUG: the log statistics gathering took: 2 wallclock secs ( 0.10 usr 0.01 sys + 0.48 cusr 0.02 csys = 0.61 CPU)
DEBUG: Output 'html' reports will be written to out.html
LOG: Ok, generating html report...
DEBUG: building reports took: 0 wallclock secs ( 0.01 usr + 0.01 sys = 0.02 CPU)
DEBUG: the total execution time took: 2 wallclock secs ( 0.11 usr 0.02 sys + 0.48 cusr 0.02 csys = 0.63 CPU)