Closed: ajtwatching closed this issue 9 months ago.
pgbadger can use a lot of memory depending on the log size and query length. You should tune your kernel to allow Perl to use more memory (vm.overcommit_ratio); you can also effectively disable the OOM killer by setting vm.overcommit_memory to 2. If this is not enough, you might want to increase the memory size.
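For example, something like this (the ratio value is illustrative; how high to set it depends on your RAM and swap):

```sh
# Strict overcommit accounting: allocations fail with ENOMEM instead of
# the OOM killer picking a victim later. CommitLimit = swap + RAM * ratio/100.
sudo sysctl -w vm.overcommit_memory=2
sudo sysctl -w vm.overcommit_ratio=90   # 90 is an example value, tune to your box

# Persist across reboots
printf 'vm.overcommit_memory = 2\nvm.overcommit_ratio = 90\n' |
  sudo tee /etc/sysctl.d/90-pgbadger-overcommit.conf
sudo sysctl --system
```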
Thanks for the input. I've bumped the memory on the VM to see how it goes. I was suspicious, though, as my daily log volume is pretty consistent:
```
postgresql-20230908: 1012M total
postgresql-20230909: 144M total
postgresql-20230910: 304M total
postgresql-20230911: 1.1G total
postgresql-20230912: 1.1G total
postgresql-20230913: 1.2G total
postgresql-20230914: 1.1G total
postgresql-20230915: 959M total
postgresql-20230916: 163M total
postgresql-20230917: 255M total
postgresql-20230918: 924M total
postgresql-20230919: 805M total
postgresql-20230920: 1.1G total
postgresql-20230921: 1.4G total
postgresql-20230922: 1.3G total
postgresql-20230924: 274M total
postgresql-20230925: 1.4G total
```
I'll keep an eye on it toward the end of this week.
Last night's run peaked at around 9GB of memory.
Yeah, so it consumed all the additional memory again.
```
I, [2023-09-29T04:15:01.515640 #2265845] INFO -- : Running pgbadger /usr/bin/pgbadger -j4 --prefix '%t [%p-%l] %h %a %q%u@%d ' --exclude-appname 'pg_dump' --retention 24 -I -q prddb21/postgresql*log.gz -O /var/www/pgbadger/reports/prddb21
E, [2023-09-29T04:40:46.241583 #2265845] ERROR -- : Error prddb21 running pgbadger
```
That correlates with the activity on the box: memory spiked to exhaustion around 04:40.
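If it helps, the kernel log around that window should show whether the OOM killer actually fired; something like:

```sh
# Look for an OOM-killer record around the failed run
# (the exact message wording varies by kernel version)
journalctl -k --since "2023-09-29 04:30" --until "2023-09-29 05:00" |
  grep -iE 'out of memory|oom'
```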
I checked the log size/volume again, and it remains consistent across the days:
```
20230925: 1.4G total
20230926: 1.2G total
20230927: 1.3G total
20230928: 1.3G total
20230929: 1.2G total
```
I'll give it some more memory!
Having bumped the VM to 32GB, it seems to be running without issue. It's still a little strange to me given that the log size is consistent, yet just once a week it gets really memory hungry... anyway, I'll close this out. Thanks for the input.
Hi,
Running pgbadger (v12.0) daily to process postgresql logs from a bunch of servers. About once a week, it seems, the process starts sucking up a bunch of memory and is then killed by the OOM killer. I then re-run it manually, and most of the time it runs through without issue (until the next week).
Seems it's always the Friday/Sunday runs...
Versions: pgbadger v12.0.
The pgbadger command is always the same; it just cycles sequentially through a bunch of hosts (sketched below).
```
INFO -- : Running pgbadger /usr/bin/pgbadger -j4 --prefix '%t [%p-%l] %h %a %q%u@%d ' --exclude-appname 'pg_dump' --retention 24 -I -q prddb30/postgresql*log.gz -O /var/www/pgbadger/reports/prddb30
```
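For context, the wrapper is essentially a loop like this (the host names here are placeholders, not the full list):

```sh
#!/bin/sh
# Sketch of the nightly wrapper: one pgbadger incremental run per host.
# prddb21/prddb30 are example names; flags match the logged command above.
for host in prddb21 prddb30; do
  /usr/bin/pgbadger -j4 \
    --prefix '%t [%p-%l] %h %a %q%u@%d ' \
    --exclude-appname 'pg_dump' \
    --retention 24 -I -q \
    "$host"/postgresql*log.gz \
    -O /var/www/pgbadger/reports/"$host" \
    || echo "Error $host running pgbadger" >&2
done
```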
The VM has 16 cores and 16GB of memory. I was running with 12 threads but dropped back to 4 in case that was the issue (it still happens with 4).
[node_exporter graph of memory increasing to the limit.] Most of the time the box is lucky to use 3GB of memory.
Any additional debugging I can perform?
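For what it's worth, here's what I could run next time to capture more data: record the peak RSS of each run with GNU time, and cap the process with a cgroup so a runaway run fails on its own instead of taking the box down (the 12G limit is an arbitrary example value; setting MemoryMax may need root):

```sh
# "Maximum resident set size" in the -v output gives the peak RSS.
# MemoryMax=12G makes a blow-up fail cleanly rather than invoking the OOM killer.
systemd-run --scope -p MemoryMax=12G \
  /usr/bin/time -v /usr/bin/pgbadger -j4 \
    --prefix '%t [%p-%l] %h %a %q%u@%d ' \
    --exclude-appname 'pg_dump' \
    --retention 24 -I -q prddb30/postgresql*log.gz \
    -O /var/www/pgbadger/reports/prddb30 \
  2> /tmp/pgbadger-prddb30.time
```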
Thanks.