kartoza / docker-pg-backup

A cron job that will back up databases running in a docker postgres container
GNU General Public License v2.0

The container's memory consumption increases over time, reaching gigabytes #85

Closed YuryHrytsuk closed 1 month ago

YuryHrytsuk commented 1 year ago

What is the bug or the crash?

The pg_backup container uses increasingly more memory over time, eventually reaching gigabytes.

(screenshot: memory usage graph)

Steps to reproduce the issue

  1. Start pg_backup as a Docker Swarm service (a minimal sketch follows this list)
  2. Keep pg_backup running for a month
  3. Check the memory usage pattern over that month
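
For reference, the deployment is roughly along the following lines. This is a minimal sketch, not our real stack file: the service, network, and volume names and the POSTGRES_* values are placeholders.

# Sketch of step 1: run the backup image as a Swarm service with a volume
# for the dumps. All names and credentials below are placeholders.
docker service create \
  --name pg_backup \
  --network db_net \
  --mount type=volume,source=db_backups,target=/backups \
  -e POSTGRES_HOST=db \
  -e POSTGRES_USER=docker \
  -e POSTGRES_PASS=docker \
  kartoza/pg-backup:14-3.3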

Versions

14-3.3

Additional context

We run pg_backup within Docker Swarm and use volumes to store the backups.

NyakudyaA commented 1 year ago

@YuryHrytsuk Have you tried limiting the memory? See https://phoenixnap.com/kb/docker-memory-and-cpu-limit
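
For a Swarm service that would be something along these lines (a sketch; pg_backup is a placeholder service name):

# Cap the service's memory at 1 GiB; Swarm kills and reschedules the task
# whenever the limit is exceeded.
docker service update --limit-memory=1G pg_backup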

YuryHrytsuk commented 1 year ago

> @YuryHrytsuk Have you tried limiting the memory? See https://phoenixnap.com/kb/docker-memory-and-cpu-limit

This will kill the container (exit code 137). Since the memory consumption jumps at around 11:00 PM, i.e. while the backup is being made, I am afraid we will lose a backup. Any limit will lead to enforced restarts, more or less often depending on how big the limit is.

I'd also like to point out again that every leap in memory consumption happens while a backup is being made. Perhaps that helps in understanding what the problem is.
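
For what it's worth, whether a kill actually came from the memory limit can be confirmed after the fact (a sketch; pg_backup is a placeholder container name):

# OOMKilled is true when the kernel/cgroup OOM killer terminated the
# container; exit code 137 on its own only indicates SIGKILL.
docker inspect -f '{{.State.OOMKilled}} {{.State.ExitCode}}' pg_backup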

NyakudyaA commented 1 year ago

How big are your database dumps? Maybe you could manually exec into the container and run the following:

free -m
/backups.sh
free -m

This should give an indication of whether the memory is released after executing the script or not. If the memory is not being released, we might need to find a way to clear the memory cache inside the container.
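
If free is inconclusive, the container's own cgroup accounting can be read directly from inside it; a sketch (paths differ between cgroup v1 and v2):

# cgroup v1: current usage, and how much of it is page cache
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
grep -w cache /sys/fs/cgroup/memory/memory.stat

# cgroup v2 equivalents
cat /sys/fs/cgroup/memory.current
grep -w file /sys/fs/cgroup/memory.stat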

YuryHrytsuk commented 1 year ago

> How big are your database dumps? Maybe you could manually exec into the container and run the following:
>
> free -m
> /backups.sh
> free -m
>
> This should give an indication of whether the memory is released after executing the script or not. If the memory is not being released, we might need to find a way to clear the memory cache inside the container.

Here we go:

# free -mh
               total        used        free      shared  buff/cache   available
Mem:            31Gi        12Gi       1.4Gi       257Mi        17Gi        18Gi
Swap:             0B          0B          0B

# ./backups.sh 

# free -mh
               total        used        free      shared  buff/cache   available
Mem:            31Gi        12Gi       1.3Gi       257Mi        17Gi        18Gi
Swap:             0B          0B          0B


YuryHrytsuk commented 1 year ago

I executed the free command again after ~30 minutes, just in case:

$free -mh
               total        used        free      shared  buff/cache   available
Mem:            31Gi        12Gi       1.6Gi       257Mi        17Gi        18Gi
Swap:             0B          0B          0B


YuryHrytsuk commented 1 year ago

Well, I apologize, but the output of the free command apparently does not make sense here: it shows the memory of the whole machine, not of the container. However, the Prometheus container_memory_usage_bytes metric still points to a memory consumption increase.
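
For container-scoped numbers, something along these lines might be more telling (a sketch; the Prometheus address and the cAdvisor service-name label are assumptions about our monitoring setup):

# Host-side view of one container's memory, avoiding free(1)'s
# whole-machine figures. pg_backup is a placeholder name.
docker stats --no-stream pg_backup

# container_memory_usage_bytes includes page cache, which grows while backup
# files are being written; the working-set metric excludes reclaimable cache.
curl -s 'http://prometheus:9090/api/v1/query' \
  --data-urlencode 'query=container_memory_working_set_bytes{container_label_com_docker_swarm_service_name="pg_backup"}'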

YuryHrytsuk commented 1 year ago

@NyakudyaA any updates on this matter?

YuryHrytsuk commented 1 month ago

My bad. Our backups indeed caused the memory increase.