collectiveidea / delayed_job_mongoid

Mongoid backend for delayed_job
MIT License

Incremental memory consumption #76

Closed rjurado01 closed 2 years ago

rjurado01 commented 6 years ago

System:

Problem

After some time, the delayed_job process ends up consuming all available memory in the development environment.

It seems the problem occurs only when we set preload_models: true in the Mongoid configuration file.
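
For reference, here is roughly how we enable it in mongoid.yml (a minimal sketch; the database name and hosts are placeholders):

```yaml
development:
  clients:
    default:
      database: my_app_development
      hosts:
        - localhost:27017
  options:
    preload_models: true
```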

It could be related to these delayed_job issues:

johnnyshields commented 3 years ago

Try setting env var MALLOC_ARENA_MAX=2 and see if that helps. We applied that in production and it greatly tamed our memory usage.
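
For example, wherever the worker process is launched (the exact command below is just illustrative):

```sh
# Must be set before the Ruby process starts, e.g. in a Procfile,
# systemd unit, or the container spec.
MALLOC_ARENA_MAX=2 bundle exec rake jobs:work
```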

johnnyshields commented 2 years ago

I'm closing this. The main problem is that DelayedJob itself is not memory efficient; it needs to implement Copy-on-Write (CoW) forking. I started a PR for that here, but it's still WIP: https://github.com/collectiveidea/delayed_job/pull/1160
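
As a rough illustration of the idea (a hypothetical sketch, not DelayedJob's actual worker loop; reserve_next_job is a made-up helper): the parent process keeps the preloaded app in memory, and each job runs in a short-lived forked child, so whatever memory the job bloats is released back to the OS when the child exits.

```ruby
# Hypothetical sketch of a Copy-on-Write forking worker loop.
loop do
  job = reserve_next_job           # made-up helper: lock the next pending job
  next sleep(5) unless job         # idle briefly when the queue is empty

  pid = fork do
    job.invoke_job                 # run the job payload inside the child process
    exit!(0)                       # hard-exit the child; the parent does bookkeeping
  end

  _, status = Process.wait2(pid)   # child memory is returned to the OS here
  job.destroy if status.success?   # simplistic: real code would also handle failures
end
```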

johnnyshields commented 2 years ago

By the way, here's a script I use to avoid the Kubernetes OOM killer, for anyone interested:

```ruby
# in an initializer

# Workaround for the Kubernetes OOMKiller, which SIGKILLs the process.
# See: https://grosser.it/2017/02/02/ruby-on-kubernetes-memory-gc-oomkilled/
# Note: OS.linux? comes from the `os` gem.
delayed_job = caller.last =~ %r{scripts?/delayed_job} ||
              (File.basename($PROGRAM_NAME) == 'rake' && ARGV[0]&.include?('jobs:work'))
k8s = OS.linux? && ENV.keys.any? { |k| k.start_with?('KUBERNETES') }

if delayed_job && k8s
  Thread.new do
    # Without this, the raise below would only kill this monitoring thread;
    # with it, the exception propagates and takes down the whole worker so
    # the supervisor can restart it.
    Thread.current.abort_on_exception = true

    loop do
      # cgroup (v1) accounting: current usage and the hierarchical limit, in MB
      used = Integer(File.read('/sys/fs/cgroup/memory/memory.usage_in_bytes')) / 1024 / 1024
      max  = Integer(`cat /sys/fs/cgroup/memory/memory.stat | grep hierarchical_memory_limit`.split.last) / 1024 / 1024
      # puts "Ram: #{used}/#{max} MB"
      raise "Out of memory: #{used}/#{max} MB" if used + 5 >= max

      GC.start
      sleep 60
    end
  end
end
```
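
The idea is that the worker raises and exits on its own, with a backtrace and a non-zero status, while there is still a small margin (5 MB here) below the cgroup limit, so Kubernetes restarts it cleanly instead of the kernel SIGKILLing it mid-job. Note that the paths above are for cgroup v1; on cgroup v2 nodes they would need adjusting.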