Closed timscott closed 9 years ago
Heroku is right, there is no way to get an accurate memory usage number. This issue is a duplicate of #6. FWIW unicorn worker killer has the exact same "bug".
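To illustrate why the numbers disagree: tools like puma_worker_killer sum each worker's resident set size (RSS), but forked Puma workers share copy-on-write pages with the master, so per-process RSS double-counts shared memory and the sum can far exceed what the dyno actually uses. A minimal sketch of reading RSS the way such tools effectively do (Linux-only, reading /proc; `rss_kb` is a hypothetical helper, not part of the gem):

```ruby
# Hypothetical illustration of per-process RSS measurement.
# Summing this across forked workers double-counts copy-on-write
# pages shared with the master process, which is why the total
# can exceed the container's real memory usage.
def rss_kb(pid = Process.pid)
  # Parse the VmRSS line from /proc/<pid>/status (Linux-only).
  File.read("/proc/#{pid}/status")[/VmRSS:\s+(\d+)/, 1].to_i
end

puts rss_kb # resident set size of this process, in kB
```

Heroku, by contrast, reports the dyno's total from the container's point of view, where shared pages are counted once.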
Dang. Maybe we should add a note to the README that this gem is not for Heroku. It could save a lot of time.
I guess we have the same issue with puma_auto_tune?
I guess I have no choice but to figure out what's causing gradual memory increase of my web workers. Ouch.
Perhaps I could use this gem in the meantime. What if I set a long frequency (like 4 hours) and a low RAM threshold? It would simply cycle web workers, like unicorn. Anything wrong with that?
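If you go that route, an initializer along these lines might do it. This is only a sketch based on the gem's documented config block; the 4-hour figure is from the comment above, and the 512 MB threshold is an arbitrary placeholder you'd tune for your dynos:

```ruby
# config/initializers/puma_worker_killer.rb -- a sketch, not a tested setup.
PumaWorkerKiller.config do |config|
  config.frequency = 4 * 3600 # only check every 4 hours, per the idea above
  config.ram       = 512      # MB; placeholder threshold, tune for your dyno size
end
PumaWorkerKiller.start
```

Newer versions of the gem also expose `PumaWorkerKiller.enable_rolling_restart`, which cycles workers on a timer without looking at memory at all, which is closer to what's being described here.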
You can also set your memory threshold higher, FWIW. I recommend https://github.com/schneems/derailed_benchmarks for tracking and killing memory leaks. 99% of the time it's not a memory leak, just memory-inefficient code. I also recommend Ruby 2.2 for its memory optimizations.
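For anyone following along, the basic derailed_benchmarks workflow looks roughly like this (commands from that project's README; assumes the gem is in your Gemfile's development group):

```shell
# Measure memory each gem consumes at require time:
bundle exec derailed bundle:mem

# Boot the app and watch RSS over time to spot growth:
bundle exec derailed exec perf:mem_over_time
```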
I just deployed puma_worker_killer on Heroku. I have two dynos and two workers per dyno. I have log_runtime_metrics enabled, which shows the memory usage per dyno. As you can see, PumaWorkerKiller thinks I am using way more memory (546.97607421875 MB) than Heroku does (291.84 MB). I think Heroku is right. As a result, PumaWorkerKiller is killing my workers every minute.