darold / pgbadger

A fast PostgreSQL Log Analyzer
http://pgbadger.darold.net/
PostgreSQL License

No queries or events detected in stderr log #82

Closed andyparsons closed 11 years ago

andyparsons commented 11 years ago

Similar questions have been asked before, but none of the resolutions seem to apply to my case. I'm using pgBadger 3.1 on a production Postgres 9.1 instance. The log options in my conf are as follows:

log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d.log'
log_rotation_age = 0
client_min_messages = notice
log_min_messages = warning
log_min_error_statement = error
log_min_duration_statement = 0
log_checkpoints = on
log_connections = on
log_disconnections = on
log_duration = off
log_error_verbosity = verbose
log_hostname = on
log_line_prefix = '%t [%p]: [%l-1] db=%d,user=%u '
log_lock_waits = on
log_statement = 'none'
log_temp_files = 0

And when I run it, no events or queries are detected:

ubuntu@db:/data/postgresql/pg_log$ pgbadger -v -f stderr -p -g postgresql-2013-03-27.log 
DEBUG: Output 'html' reports will be written to out.html
DEBUG: Starting to parse log file: postgresql-2013-03-27.log
DEBUG: Starting reading file postgresql-2013-03-27.log...
[==================>      ] Parsed  905078048 bytes of 1164689450 (77.71%), queries: 0, events: 0

I'm guessing I am making a noob mistake, but I'm not sure what to try here. Thanks. pgbadger is fantastic and I cannot wait to analyze my logs on this machine!

darold commented 11 years ago

Hi Andy,

Well, could you explain what you expect from the -p and -g command-line options? The -p option needs a value (it is strange that pgbadger doesn't warn about that) and is used to set the prefix that you have set in log_line_prefix. The -g option does not exist; perhaps you mean -G, which is used to disable graphs.

In your case, given your configuration, I recommend simply running pgbadger like this:

pgbadger postgresql-2013-03-27.log

It should work perfectly :-)

andyparsons commented 11 years ago

Thanks. -p and -g were only there because I was trying a number of options. Vanilla pgbadger unfortunately shows the same behavior:

ubuntu@db:/data/postgresql/pg_log$ pgbadger -v postgresql-2013-03-27.log 
DEBUG: Autodetected log format 'stderr' from postgresql-2013-03-27.log
DEBUG: Output 'html' reports will be written to out.html
DEBUG: Starting to parse log file: postgresql-2013-03-27.log
DEBUG: Starting reading file postgresql-2013-03-27.log...
[==========>              ] Parsed  523849114 bytes of 1164689450 (44.98%), queries: 0, events: 0

darold commented 11 years ago

Could you post some lines of your log, please? It seems strange to me because your log_line_prefix is well formed.

andyparsons commented 11 years ago

Absolutely, here is a random snippet:

2013-03-28 11:52:53 UTC LOCATION:  exec_simple_query, postgres.c:1097
2013-03-28 11:52:53 UTC LOG:  00000: connection received: host=ip-10-32-5-253.ec2.internal port=53457
2013-03-28 11:52:53 UTC LOCATION:  BackendInitialize, postmaster.c:3474
2013-03-28 11:52:53 UTC LOG:  00000: duration: 0.071 ms  statement: show transaction isolation level
darold commented 11 years ago

Ok, I've removed most of your log because I just need a few lines to see the log format. And here you have a problem: your log doesn't correspond to your log_line_prefix, so you should reload your configuration file. You should have something like this:

2013-03-28 11:52:53 UTC [2346]: [1-1] LOG:  duration: 0.071 ms  statement: show transaction isolation level

Please reload PostgreSQL and see if new lines have this format.

darold commented 11 years ago

I had omitted the user/db part above; it should actually look like this:

2013-03-28 11:52:53 UTC [2346]: [1-1] db=mydb,user=myuser LOG:  duration: 0.071 ms  statement: show transaction isolation level

andyparsons commented 11 years ago

Ah, correct! Apparently reload is not working properly. Thanks for the quick attention.

amitrajpurkar commented 11 years ago

Hi,

I am having the same issue andyparsons had. Here are a few sample lines from my log file:

2013-07-08 16:34:47 EST [25348]: [222] user=booka, db=postgres LOG: duration: 0.231 ms
2013-07-08 16:36:25 EST [25355]: [63] user=booka, db=track LOG: duration: 567.857 ms statement: SELECT s.shipx_pkey, s.shpxsndn, to_char(s.shpxcdat,'dd/mm/yyyy'), to_char(coalesce((select edate from trk t where s.shpxbl=t.hawb_bl AND event IN ('CTA', 'ETA') order by event, updated desc limit 1), s.shpxeta),'dd/mm/yyyy'), s.shpxvesl, s.shpxfl1, s.shpxcnam1, s.shpxsnam1, s.bracsgn, s.shpxcucl, a.consignee_email, a.broker_email, to_char(l.sent_timestamp, 'dd/mm/yyyy HH12:MIam'),

My Postgres version is 9.2.4. The logging settings in postgresql.conf are very similar to the ones andyparsons had:

log_min_duration_statement = 500
log_statement = 'none'
log_line_prefix = '%t [%p]: [%l] user=%u,db=%d '
lc_messages = 'C'

The command I run:

pgbadger --dbname track --prefix '%t [%p]:[%l] user=%u,db=%d ' --outfile /var/www/html/pgbadger/index.html /data1/pgsql/data/pg_log/postgresql-Mon.log

Any suggestions on what I could be missing?

darold commented 11 years ago

Hi,

Following your log sample, you have an extra space before the db field: user=booka, db=postgres

Your prefix should be written as:

'%t [%p]:[%l] user=%u, db=%d '
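
As a rough self-check (a sketch using grep, not pgbadger's actual parser; the regex below is hand-written to approximate the fields of the sample line above, including the ", db=" with a space), you can test a line from the log:

```shell
# Sample line taken from the log above; the pattern approximates the
# timestamp/pid/session-line fields and requires ", db=" with a space.
line='2013-07-08 16:36:25 EST [25355]: [63] user=booka, db=track LOG: duration: 567.857 ms'
echo "$line" | grep -Eq '^[^ ]+ [^ ]+ [^ ]+ \[[0-9]+\]: \[[0-9]+\] user=[^,]+, db=[^ ]+ ' \
  && result=match || result=no-match
echo "$result"
```

If the line had been logged without the space before db=, the pattern would report no-match, which is exactly the kind of mismatch that makes pgbadger count zero queries.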

Regards,

amitrajpurkar commented 11 years ago

You are correct, that did fix my problem. I made a note of that point. Thanks a lot for your prompt response.

psurana0329 commented 10 years ago

I am having the same issue. My logs look like this:

21 Nov 2013 14:22:56,991 com.datadomain.ddms.dblib.util.DDMSQuery.getResultList:82 [inventory.dc04d8424d1616ed1d297272f4b67ec35] INFO : select model as _value from cfg_system_nodes_view where cfg_system_nodes_view.system_id=34 Response Time: 1(ms)
21 Nov 2013 14:22:56,992 com.datadomain.ddms.dblib.util.DDMSQuery.getResultList:82 [inventory.dc04d8424d1616ed1d297272f4b67ec35] INFO : select ddos_version as _value from cfg_system_nodes_view where cfg_system_nodes_view.system_id=34 Response Time: 1(ms)
21 Nov 2013 14:22:56,993 com.datadomain.ddms.dblib.util.DDMSQuery.getResultList:82 [inventory.dc04d8424d1616ed1d297272f4b67ec35] INFO : select node_domain_name as _value from cfg_system_nodes_view where cfg_system_nodes_view.system_id=34 Response Time: 1(ms)

In my postgresql.conf I have log_line_prefix='%m %a (%p) '

This is what I pass:

[root@suranp-dl Desktop]# pgbadger -o logit1.html -p string'%m %a (%p) ' -f stderr ddms.info
[========================>] Parsed 3203846 bytes of 3203846 (100.00%), queries: 0, events: 0
LOG: Ok, generating html report...

darold commented 10 years ago

Hi,

What you are trying to parse is not a valid PostgreSQL log; it seems to be a log from your Java driver. You can verify that from the log line prefix, which does not correspond at all. Please use a log file generated by PostgreSQL.

Regards,

psurana0329 commented 10 years ago

Darold, now I am trying the same with a valid Postgres log, yet I don't see anything in the report.

Below is the sample from the pg log.

2013-11-27 03:02:17.061 PST psql (5555) LOG: duration: 15380.602 ms statement: select pgstatspack_delete_snap ();
2013-11-27 14:16:51.889 PST pgAdmin III - Query Tool (1070) ERROR: syntax error at or near "//" at character 1062
2013-11-27 14:16:51.889 PST pgAdmin III - Query Tool (1070) STATEMENT: insert into cd_cfg_tenant_units (uuid, tenant_id, epoch, name,system_id,node_id,state,self_service_enabled,created_from_ddmc,admin_name,admin_email,dataset_size_in_gib, expected_growth_pct,growth_time_period_in_months) values ('100_tenantunituuid',1,extract (epoch from CURRENT_TIMESTAMP),'Finance',4,4,1,1,1,'Mary Janes','mjanes@emc.com',100,50,6);

    insert into cd_cfg_tenant_units
    (uuid, tenant_id, epoch, name,system_id,node_id,state,self_service_enabled,created_from_ddmc,admin_name,admin_email,dataset_size_in_gib,
    expected_growth_pct,growth_time_period_in_months)
    values
    ('101_tenantunituuid',1,extract (epoch from CURRENT_TIMESTAMP),'Engineering',4,4,1,1,1,'Joe Bloggs','jbloggs@emc.com',120,30,9);

    insert into cd_cfg_tenant_units
    (uuid, tenant_id, epoch, name,system_id,node_id,state,self_service_enabled,created_from_ddmc,admin_name,admin_email,dataset_size_in_gib,
    expected_growth_pct,growth_time_period_in_months)
    values
    ('102_tenantunituuid',2,extract (epoch from CURRENT_TIMESTAMP),'Payroll',4,4,1,1,1,'Tom Thumbs','tthumbs@emc.com',180,20,12);

    // Unassigned tenant unit
    insert into cd_cfg_tenant_units
    (uuid, tenant_id, epoch, name,system_id,node_id,state,self_service_enabled,created_from_ddmc,admin_name,admin_email,dataset_size_in_gib,
    expected_growth_pct,growth_time_period_in_months)
    values
    ('103_tenantunituuid',null,extract (epoch from CURRENT_TIMESTAMP),'HR',4,4,1,1,1,'Billy Bob','bbob@emc.com',150,30,3);

2013-11-27 14:17:02.906 PST pgAdmin III - Query Tool (1070) ERROR: insert or update on table "cd_cfg_tenant_units" violates foreign key constraint "cd_cfg_tenant_units_node_id_fkey"
2013-11-27 14:17:02.906 PST pgAdmin III - Query Tool (1070) DETAIL: Key (node_id)=(4) is not present in table "cfg_nodes_indices".
2013-11-27 14:17:02.906 PST pgAdmin III - Query Tool (1070) STATEMENT: insert into cd_cfg_tenant_units (uuid, tenant_id, epoch, name,system_id,node_id,state,self_service_enabled,created_from_ddmc,admin_name,admin_email,dataset_size_in_gib, expected_growth_pct,growth_time_period_in_months) values ('100_tenantunituuid',1,extract (epoch from CURRENT_TIMESTAMP),'Finance',4,4,1,1,1,'Mary Janes','mjanes@emc.com',100,50,6);

    insert into cd_cfg_tenant_units
    (uuid, tenant_id, epoch, name,system_id,node_id,state,self_service_enabled,created_from_ddmc,admin_name,admin_email,dataset_size_in_gib,
    expected_growth_pct,growth_time_period_in_months)
    values
    ('101_tenantunituuid',1,extract (epoch from CURRENT_TIMESTAMP),'Engineering',4,4,1,1,1,'Joe Bloggs','jbloggs@emc.com',120,30,9);

    insert into cd_cfg_tenant_units
    (uuid, tenant_id, epoch, name,system_id,node_id,state,self_service_enabled,created_from_ddmc,admin_name,admin_email,dataset_size_in_gib,
    expected_growth_pct,growth_time_period_in_months)
    values


In my postgresql.conf I have log_line_prefix='%m %a (%p) '

This is what I pass:

[root@suranp-dl Desktop]# pgbadger -o logit1.html -p string'%m %a (%p) ' -f stderr ddms.info
[========================>] Parsed 3203846 bytes of 3203846 (100.00%), queries: 0, events: 0
LOG: Ok, generating html report...

darold commented 10 years ago

Ok,

pgAdmin is the only application I know of that needs more than one word as its application name, and this is the source of the issue. Please execute the following command on your log file:

 sed -i 's/pgAdmin III - Query Tool/pgAdmin/ ddms.info

then rerun your pgbadger command line.

psurana0329 commented 10 years ago

Darold,

Can you please elaborate on how to execute this on my log file?

psurana0329 commented 10 years ago

I am getting the following error when I execute the above-mentioned CLI:

[suranp1@suranp-dl /]$ sed -i 's/pgAdmin III - Query Tool/pgAdmin/ postgresqllog-Wed.log'
sed: -e expression #1, char 38: unknown option to `s'
[suranp1@suranp-dl /]$

darold commented 10 years ago

Hum, sorry, a single quote was missing and you put it in the wrong place. Here is the command to execute:

sed -i 's/pgAdmin III - Query Tool/pgAdmin/' postgresqllog-Wed.log
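
Before modifying the file in place, the substitution can be dry-run on a single sample line (plain sed writing to stdout; nothing touches the file because -i is omitted):

```shell
# Same substitution as above, but without -i, so the log file is untouched.
echo '2013-11-27 14:16:51.889 PST pgAdmin III - Query Tool (1070) ERROR: syntax error' \
  | sed 's/pgAdmin III - Query Tool/pgAdmin/'
# → 2013-11-27 14:16:51.889 PST pgAdmin (1070) ERROR: syntax error
```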

Regards,

psurana0329 commented 10 years ago

Darold, I ran the command mentioned above and then ran pgbadger again, and it still says 0 queries and 0 events.

[suranp1@suranp-dl Desktop]$ pgbadger -o examplereport.html --prefix string'%m %a (%p) ' postgresqllog-Wed.log
[========================>] Parsed 5610 bytes of 5610 (100.00%), queries: 0, events: 0
LOG: Ok, generating html report…

Please let me know what I am doing wrong. Thanks a lot for your help.

psurana0329 commented 10 years ago

Sample Log file:

2013-11-27 03:02:17.061 PST psql (5555) LOG: duration: 15380.602 ms statement: select pgstatspack_delete_snap ();
2013-11-27 14:16:51.889 PST pgAdmin III - Query Tool (1070) ERROR: syntax error at or near "//" at character 1062
2013-11-27 14:16:51.889 PST pgAdmin III - Query Tool (1070) STATEMENT: insert into cd_cfg_tenant_units (uuid, tenant_id, epoch, name,system_id,node_id,state,self_service_enabled,created_from_ddmc,admin_name,admin_email,dataset_size_in_gib, expected_growth_pct,growth_time_period_in_months) values ('100_tenantunituuid',1,extract (epoch from CURRENT_TIMESTAMP),'Finance',4,4,1,1,1,'Mary Janes','mjanes@emc.com',100,50,6);

insert into cd_cfg_tenant_units
(uuid, tenant_id, epoch, name,system_id,node_id,state,self_service_enabled,created_from_ddmc,admin_name,admin_email,dataset_size_in_gib,
expected_growth_pct,growth_time_period_in_months)
values
('101_tenantunituuid',1,extract (epoch from CURRENT_TIMESTAMP),'Engineering',4,4,1,1,1,'Joe Bloggs','jbloggs@emc.com',120,30,9);

insert into cd_cfg_tenant_units
(uuid, tenant_id, epoch, name,system_id,node_id,state,self_service_enabled,created_from_ddmc,admin_name,admin_email,dataset_size_in_gib,
expected_growth_pct,growth_time_period_in_months)
values
('102_tenantunituuid',2,extract (epoch from CURRENT_TIMESTAMP),'Payroll',4,4,1,1,1,'Tom Thumbs','tthumbs@emc.com',180,20,12);

// Unassigned tenant unit
insert into cd_cfg_tenant_units
(uuid, tenant_id, epoch, name,system_id,node_id,state,self_service_enabled,created_from_ddmc,admin_name,admin_email,dataset_size_in_gib,
expected_growth_pct,growth_time_period_in_months)
values
('103_tenantunituuid',null,extract (epoch from CURRENT_TIMESTAMP),'HR',4,4,1,1,1,'Billy Bob','bbob@emc.com',150,30,3);

2013-11-27 14:17:02.906 PST pgAdmin III - Query Tool (1070) ERROR: insert or update on table "cd_cfg_tenant_units" violates foreign key constraint "cd_cfg_tenant_units_node_id_fkey"
2013-11-27 14:17:02.906 PST pgAdmin III - Query Tool (1070) DETAIL: Key (node_id)=(4) is not present in table "cfg_nodes_indices".
2013-11-27 14:17:02.906 PST pgAdmin III - Query Tool (1070) STATEMENT: insert into cd_cfg_tenant_units (uuid, tenant_id, epoch, name,system_id,node_id,state,self_service_enabled,created_from_ddmc,admin_name,admin_email,dataset_size_in_gib, expected_growth_pct,growth_time_period_in_months) values ('100_tenantunituuid',1,extract (epoch from CURRENT_TIMESTAMP),'Finance',4,4,1,1,1,'Mary Janes','mjanes@emc.com',100,50,6);

insert into cd_cfg_tenant_units
(uuid, tenant_id, epoch, name,system_id,node_id,state,self_service_enabled,created_from_ddmc,admin_name,admin_email,dataset_size_in_gib,
expected_growth_pct,growth_time_period_in_months)
values
('101_tenantunituuid',1,extract (epoch from CURRENT_TIMESTAMP),'Engineering',4,4,1,1,1,'Joe Bloggs','jbloggs@emc.com',120,30,9);

insert into cd_cfg_tenant_units
(uuid, tenant_id, epoch, name,system_id,node_id,state,self_service_enabled,created_from_ddmc,admin_name,admin_email,dataset_size_in_gib, expected_growth_pct,growth_time_period_in_months) values

darold commented 10 years ago

Please download the latest development code; there's a patch to handle application names with spaces, like "pgAdmin III - Query Tool". It will work with your log now.

Also, remove the word string from your command line; use:

pgbadger -o examplereport.html --prefix '%m %a (%p) ' postgresqllog-Wed.log

Let us know,

Regards,

psurana0329 commented 10 years ago

Do I need to install the latest version of pgbadger, or is there a way to upgrade it?

darold commented 10 years ago

You can download pgBadger from https://github.com/dalibo/pgbadger/ using the "Download ZIP" button at the bottom right of the page.

Then you can choose to install it again, as root:

perl Makefile.PL
make
make install

or just overwrite the pgbadger Perl script that may be installed at /usr/local/bin/pgbadger.

Regards,

psurana0329 commented 10 years ago

Darold,

Is there a way we could pass multiple log files and generate one pgbadger report?

darold commented 10 years ago

Hi,

Yes, this is one of the strengths of pgbadger. For example, reusing your previous command line:

pgbadger -o examplereport.html --prefix '%m %a (%p) ' /var/log/postgresql/*.log

will parse all files with the .log extension in the /var/log/postgresql/ directory. If you also have compressed logs in this directory, you can proceed as follows:

pgbadger -o examplereport.html --prefix '%m %a (%p) ' /var/log/postgresql/*.log /var/log/postgresql/*.log.gz

And if you want to parse those log files at full speed, use the -J option with the number of CPU cores to use.
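
For example (a sketch: the paths are hypothetical, and the command is only built and printed here rather than executed, since the right core count depends on the machine):

```shell
# Detect the core count (nproc on Linux, sysctl on BSD/macOS) and build
# the parallel variant of the command above; -J runs one file per process.
CORES=$(nproc 2>/dev/null || sysctl -n hw.ncpu)
echo "pgbadger -J $CORES -o examplereport.html --prefix '%m %a (%p) ' /var/log/postgresql/*.log"
```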

Is that what you are looking for?

krish050591 commented 10 years ago

My log file:

2014-08-06 14:34:30 IST [13739]:[2-1] db=[unknown],user=[unknown] LOCATION: BackendInitialize, postmaster.c:3857
2014-08-06 14:34:30 IST [13739]:[3-1] db=p360_slicdev_demo,user=postgres LOG: 00000: connection authorized: user=postgres database=p360_slicdev_demo
2014-08-06 14:34:30 IST [13739]:[4-1] db=p360_slicdev_demo,user=postgres LOCATION: PerformAuthentication, postinit.c:239
2014-08-06 14:34:53 IST [13739]:[5-1] db=p360_slicdev_demo,user=postgres LOG: 00000: statement: select * from p360_dispatch;
2014-08-06 14:34:53 IST [13739]:[6-1] db=p360_slicdev_demo,user=postgres LOCATION: exec_simple_query, postgres.c:890
2014-08-06 14:34:53 IST [13739]:[7-1] db=p360_slicdev_demo,user=postgres LOG: 00000: duration: 1.127 ms
2014-08-06 14:34:53 IST [13739]:[8-1] db=p360_slicdev_demo,user=postgres LOCATION: exec_simple_query, postgres.c:1118
2014-08-06 14:35:07 IST [13739]:[9-1] db=p360_slicdev_demo,user=postgres LOG: 00000: statement: select * from p360_group_role;
2014-08-06 14:35:07 IST [13739]:[10-1] db=p360_slicdev_demo,user=postgres LOCATION: exec_simple_query, postgres.c:890
2014-08-06 14:35:07 IST [13739]:[11-1] db=p360_slicdev_demo,user=postgres LOG: 00000: duration: 1.644 ms
2014-08-06 14:35:07 IST [13739]:[12-1] db=p360_slicdev_demo,user=postgres LOCATION: exec_simple_query, postgres.c:1118
2014-08-06 14:35:14 IST [13739]:[13-1] db=p360_slicdev_demo,user=postgres LOG: 00000: statement: select * from p360_group_rptg_role;
2014-08-06 14:35:14 IST [13739]:[14-1] db=p360_slicdev_demo,user=postgres LOCATION: exec_simple_query, postgres.c:890
2014-08-06 14:35:14 IST [13739]:[15-1] db=p360_slicdev_demo,user=postgres LOG: 00000: duration: 1.040 ms
2014-08-06 14:35:14 IST [13739]:[16-1] db=p360_slicdev_demo,user=postgres LOCATION: exec_simple_query, postgres.c:1118
2014-08-06 14:35:26 IST [13739]:[17-1] db=p360_slicdev_demo,user=postgres LOG: 00000: statement: select * from p360_groupadmin;
2014-08-06 14:35:26 IST [13739]:[18-1] db=p360_slicdev_demo,user=postgres LOCATION: exec_simple_query, postgres.c:890
2014-08-06 14:35:26 IST [13739]:[19-1] db=p360_slicdev_demo,user=postgres LOG: 00000: duration: 1.297 ms
2014-08-06 14:35:26 IST [13739]:[20-1] db=p360_slicdev_demo,user=postgres LOCATION: exec_simple_query, postgres.c:1118
2014-08-06 14:35:36 IST [13739]:[21-1] db=p360_slicdev_demo,user=postgres LOG: 00000: statement: select * from p360_m_user;
2014-08-06 14:35:36 IST [13739]:[22-1] db=p360_slicdev_demo,user=postgres LOCATION: exec_simple_query, postgres.c:890
2014-08-06 14:35:36 IST [13739]:[23-1] db=p360_slicdev_demo,user=postgres LOG: 00000: duration: 1.441 ms
2014-08-06 14:35:36 IST [13739]:[24-1] db=p360_slicdev_demo,user=postgres LOCATION: exec_simple_query, postgres.c:1118
2014-08-06 14:35:42 IST [13739]:[25-1] db=p360_slicdev_demo,user=postgres LOG: 00000: statement: select * from p360_m_file;
2014-08-06 14:35:42 IST [13739]:[26-1] db=p360_slicdev_demo,user=postgres LOCATION: exec_simple_query, postgres.c:890
2014-08-06 14:35:42 IST [13739]:[27-1] db=p360_slicdev_demo,user=postgres LOG: 00000: duration: 1.055 ms
2014-08-06 14:35:42 IST [13739]:[28-1] db=p360_slicdev_demo,user=postgres LOCATION: exec_simple_query, postgres.c:1118
2014-08-06 14:35:49 IST [13739]:[29-1] db=p360_slicdev_demo,user=postgres LOG: 00000: statement: select * from p360_status;
2014-08-06 14:35:49 IST [13739]:[30-1] db=p360_slicdev_demo,user=postgres LOCATION: exec_simple_query, postgres.c:890


command: pgbadger PATH/.logfile

Getting this:

Parsed 6901 bytes of 6901 (100.00%), queries: 0, events: 0
LOG: Ok, generating html report...

krish050591 commented 10 years ago

There are queries in the logfile but in the html file we are getting 0 queries and events. Please suggest a solution.

darold commented 10 years ago

Hum, do you really need debug-level information?

Here I see two problems. The first is that it seems you don't have a space character between the pid and the session line number: for example, [13739]:[28-1] should be [13739]: [28-1]. Alternatively, set the prefix on the command line. The second problem is that you have not read the documentation before posting; please review the postgresql.conf parameters as explained in the doc.
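
The mismatch is easy to demonstrate (a sketch with grep; both patterns are hand-written approximations of the prefix, with and without the space after the colon, and the sample line is taken from the log above):

```shell
line='2014-08-06 14:34:53 IST [13739]:[5-1] db=p360_slicdev_demo,user=postgres LOG: 00000: statement: select * from p360_dispatch;'
# Prefix with a space after the colon, as in '%t [%p]: [%l-1]':
echo "$line" | grep -Eq '^[^ ]+ [^ ]+ [^ ]+ \[[0-9]+\]: \[[0-9]+-1\]' && m1=yes || m1=no
# Prefix without the space, matching the log as actually written:
echo "$line" | grep -Eq '^[^ ]+ [^ ]+ [^ ]+ \[[0-9]+\]:\[[0-9]+-1\]' && m2=yes || m2=no
echo "with space: $m1, without space: $m2"
```

Only the second pattern matches, so a prefix written with the space will never line up with this log.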

Regards,

rahulsinghm01 commented 10 years ago

Hi, I too have a similar issue. Here is my log file:

2014-09-16 15:32:08 MDT [27355]:[73538] user=abc, db=abcdb LOG: execute : SELECT Products.productID,Products.productNAME FROM Products WHERE Products.productID='3484494948' AND Products.loc='S'
2014-09-16 15:32:08 MDT [27355]:[73539] user=abc, db=abcdb LOG: duration: 0.839 ms
2014-09-16 15:32:08 MDT [27355]:[73540] user=abc, db=abcdb LOG: duration: 0.385 ms

under postgresql.conf file: log_line_prefix = '%t [%p]:[%l] user=%u, db=%d '

It is not detecting any queries or events.

Thanks in advance.

darold commented 10 years ago

No, the issue is that you don't RTFM. Your log_line_prefix is good, but you have to disable log_duration and log_statement and then activate log_min_duration_statement instead.

pcav commented 8 years ago

Thanks. I have:

log_duration = off

log_statement = 'none' # none, ddl, mod, all

log_min_duration_statement = 0

zombiekillaa commented 5 years ago

I ran the following command to parse my log file:

./pgbadger --prefix '%t - %h - %d - %a ' -f stderr -x json --prettify-json /Users/postgre.log

Its output is shown below:

[========================>] Parsed 282332144 bytes of 282332144 (100.00%), queries: 0, events: 1

When I parse in HTML format, it does not show the actual query distribution on the HTML page, but it does show the query time histogram of the various queries.

Sample of the logfile:

2019-07-20 08:46:29 PDT - 10.163.33.30 - dbname - [unknown] LOG: duration: 1045.242 ms statement: SELECT COUNT(*) FROM "builder_build"
2019-07-20 08:45:55 PDT - 10.163.33.30 - dbname - [unknown] LOG: duration: 2328.279 ms statement: SELECT "builder_deliverable"."vmodl_id", "builder_deliverable"."targetchangeset" FROM "builder_deliverable"
2019-07-20 08:45:55 PDT - 10.163.33.30 - dbname - [unknown] LOG: duration: 3370.267 ms statement: SELECT "tester_testrun"."location_id", "tester_testrun"."imagefilename" FROM "tester_testrun"

What should be done so that the different queries can also be identified by pgbadger?

Thanks in advance.

darold commented 5 years ago

Where is the %p placeholder in your prefix? Documentation says:

    pgBadger supports any custom format set into the log_line_prefix
    directive of your postgresql.conf file as long as it at least specify
    the %t and %p patterns.

zombiekillaa commented 5 years ago

Thanks @darold, I changed the log_line_prefix directive in postgresql.conf, and the new logs with the %p placeholder worked fine for me.

macfredd commented 4 years ago

It is probably something obvious, but if your logs are in a language other than English, it will not work. I had mine in Spanish, so I had to change it to English:

# locale for system error messages
lc_messages = 'en_US.utf8'

ajaimesb commented 3 years ago

Hi, I am trying to parse my log file and I am getting 0 Queries:

pgbadger.bat -p "%t:%r:%u@%d:[%p]:" -o C:\temp\Performance2.html C:\temp\postgresql.log.2021-02-01-15
[========================>] Parsed 399 bytes of 399 (100.00%), queries: 0, events: 0

This is an example of the log file

2021-02-01 15:22:21 UTC:10.4.1.97(53581):dbuser@dbname:[18449]:LOG: duration: 7463.050 ms bind : SELECT * FROM MyTable WHERE field1 = $1 and field2 = $2 and field3 = $3 and field4 = $4 and field5 = $5
2021-02-01 15:22:21 UTC:10.4.1.97(53581):dbuser@dbname:[18449]:DETAIL: parameters: $1 = '4118000000003184216', $2 = '4118000000003184146', $3 = '1', $4 = '4118000000003185789', $5 = '8'

This is the configuration of the server:

log_line_prefix %t:%r:%u@%d:[%p]:
log_min_duration_statement 3s
log_min_error_statement ERROR
log_duration OFF
log_statement NONE

Thanks in advance

darold commented 3 years ago

I think there are too few log entries in your log file, and bind statements are not taken into account when there is no execute trace following them.

ajaimesb commented 3 years ago

Am I missing a configuration parameter? The only thing I see in the log is this:

2021-02-01 16:24:06 UTC:10.4.1.97(53625):user@dbname:[21465]:LOG: duration: 7497.559 ms bind : SELECT ... (Query)
2021-02-01 16:24:06 UTC:10.4.1.97(53625):user@dbname:[21465]:DETAIL: parameters: $1 = '4118000000003184416', $2 = '4118000000003184371', $3 = '1', $4 = '4118000000003185964', $5 = '13'

I can't see an execute log line

Thanks in advance.

darold commented 3 years ago

Yes, at the moment you have only this one bind statement that takes more than 3 seconds. You can try decreasing the log_min_duration_statement configuration directive, to 1s for example, to see if more statements get logged.
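
For instance (a postgresql.conf sketch; 1s follows the suggestion above, and the server must be reloaded for the change to take effect):

```
log_min_duration_statement = 1s    # log every statement running for 1 second or more
```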

nisha1uto commented 6 months ago

Hi, I got this log error; can you resolve it?

INIT_REPORT Init Duration: 10001.79 ms Phase: init Status: timeout
[> ] Parsed 0 bytes of 14940414 (0.00%), queries: 0, events: 0
[> ] Parsed 62534 bytes of 14940414 (0.42%), queries: 0, events: 0
[> ] Parsed 601913 bytes of 14940414 (4.03%), queries: 0, events: 0
[=========> ] Parsed 6175143 bytes of 14940414 (41.33%), queries: 1, events: 0
[========================>] Parsed 14940414 bytes of 14940414 (100.00%), queries: 29, events: 0
LOG: Ok, generating txt report...
/tmp/slack_sdk/web/client.py:3532: UserWarning: Although the channels parameter is still supported for smooth migration from legacy files.upload, we recommend using the new channel parameter with a single str value instead for more clarity. warnings.warn(
Contents of /tmp/ directory: ['pandas', 'slack_sdk', 'six.py', 'pip-build-tracker-hszykz3q', 'slack_sdk-3.27.1.dist-info', 'pip-unpack-rekzwihv', 'numpy.libs', 'pytz', 'pip-unpack-4_v0fc25', 'pandas-2.0.3.dist-info', 'pip-unpack-i6fb29_1', 'tzdata-2024.1.dist-info', 'pip-unpack-2wtm5bfl', 'python_dateutil-2.9.0.post0.dist-info', 'slack', 'pycache', 'numpy-1.24.4.dist-info', 'bin', 'pip-target-sb8t8h0t', 'numpy', 'pytz-2024.1.dist-info', 'pip-unpack-_60nn367', 'pip-ephem-wheel-cache-jitzkmi7', 'pip-install-4o0q7fuk', 'dateutil', 'six-1.16.0.dist-info', 'tzdata', 'pip-unpack-uv9s74nd', 'pip-unpack-vr0vi9zd', 'pip-unpack-xo9lbc1m']
Combined slow queries posted successfully to Slack!
INIT_REPORT Init Duration: 83821.15 ms Phase: invoke Status: error Error Type: Runtime.ExitError
START RequestId: 7f0ec700-d116-4d27-a460-d67fbb644a60 Version: $LATEST
RequestId: 7f0ec700-d116-4d27-a460-d67fbb644a60 Error: Runtime exited without providing a reason Runtime.ExitError
END RequestId: 7f0ec700-d116-4d27-a460-d67fbb644a60
REPORT RequestId: 7f0ec700-d116-4d27-a460-d67fbb644a60 Duration: 83840.03 ms Billed Duration: 83841 ms Memory Size: 300 MB Max Memory Used: 300 MB