Closed: rjurado01 closed this issue 2 years ago
Try setting the env var MALLOC_ARENA_MAX=2 and see if that helps. We applied it in production and it greatly tamed our memory usage.
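As a sketch of how to apply it: the variable only affects processes that inherit it, so export it in the same shell (or process manager unit) that launches the worker. The `bin/delayed_job` path below is a typical entry point and is an assumption; adjust it to your app.

```shell
# Cap the number of glibc malloc arenas before starting the worker.
# Fewer arenas trade some allocator concurrency for lower RSS.
export MALLOC_ARENA_MAX=2

# Then start the worker from this shell, e.g. (path is an assumption):
# bundle exec bin/delayed_job run
```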
I'm closing this. The main problem is that DelayedJob itself is not memory-efficient; it needs to implement Copy-on-Write (CoW) forking. I started a PR for that, but it's still a WIP: https://github.com/collectiveidea/delayed_job/pull/1160
By the way, here's a script I use to avoid the Kubernetes OOM killer, for anyone interested:
# in an initializer
# Workaround for the Kubernetes OOMKiller, which SIGKILLs the process.
# See: https://grosser.it/2017/02/02/ruby-on-kubernetes-memory-gc-oomkilled/
# Note: OS.linux? comes from the `os` gem; the memory files below are cgroup v1 paths.
delayed_job = caller.last =~ %r{scripts?/delayed_job} ||
              (File.basename($PROGRAM_NAME) == 'rake' && ARGV[0].to_s.include?('jobs:work'))
k8s = OS.linux? && ENV.keys.any? { |k| k.start_with?('KUBERNETES') }

if delayed_job && k8s
  Thread.new do
    loop do
      used = Integer(File.read('/sys/fs/cgroup/memory/memory.usage_in_bytes')) / 1024 / 1024
      stat = File.read('/sys/fs/cgroup/memory/memory.stat')
      max  = Integer(stat[/hierarchical_memory_limit (\d+)/, 1]) / 1024 / 1024
      # puts "Ram: #{used}/#{max} MB"
      raise "Out of memory: #{used}/#{max} MB" if used + 5 >= max
      GC.start
      sleep 60
    end
  end
end
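One caveat with the script above: it reads cgroup v1 paths, which do not exist on nodes running cgroup v2 (the default on newer kernels and Kubernetes node images). A minimal sketch of a version-agnostic helper follows; `cgroup_memory_mb` is a hypothetical name, and the `root` parameter exists only so the logic can be exercised against a test directory.

```ruby
# Hypothetical helper: returns [used_mb, limit_mb] under cgroup v1 or v2.
# `root` defaults to the real cgroup mount point.
def cgroup_memory_mb(root = '/sys/fs/cgroup')
  if File.exist?(File.join(root, 'memory.current')) # cgroup v2 layout
    used = Integer(File.read(File.join(root, 'memory.current')))
    raw  = File.read(File.join(root, 'memory.max')).strip
    # cgroup v2 reports "max" when no limit is set
    max  = raw == 'max' ? Float::INFINITY : Integer(raw)
  else # cgroup v1 layout
    used = Integer(File.read(File.join(root, 'memory/memory.usage_in_bytes')))
    max  = Integer(File.read(File.join(root, 'memory/memory.limit_in_bytes')))
  end
  [used / 1024 / 1024, max / 1024 / 1024]
end
```

The watchdog loop could then compare the two returned values instead of reading the v1 files directly.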
System:
Problem
After some time, the delayed_job process takes up all the memory in the development environment.
It seems that the problem occurs only when we set
preload_models: true
in the Mongoid configuration file. It could be related to these delayed_job issues:
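For reference, a minimal mongoid.yml sketch showing where that option lives (the database name and host below are placeholders, not values from this report):

```yaml
development:
  clients:
    default:
      database: my_app_development   # placeholder
      hosts:
        - localhost:27017            # placeholder
  options:
    # Eagerly loads all model classes at boot; the setting that
    # appears to trigger the memory growth described above.
    preload_models: true
```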