arches / whacamole

restart heroku dynos that run out of RAM instead of swapping to disk
115 stars 12 forks

Difference between other Ruby gems #16

Closed: mkcode closed this issue 8 years ago

mkcode commented 8 years ago

I have been running Puma with the puma-worker-killer gem, and it has been working well for me for some time. This gem seems to solve the same problem as that gem. Can you provide some documentation about when one would want to use this gem as opposed to puma_worker_killer in the case of Puma, or unicorn_worker_killer in the case of Unicorn?

Much appreciated.

MrHubble commented 8 years ago

when one would want to use this gem as opposed to puma_worker_killer in the case of puma

@mkcode did you find an answer to your question?

arches commented 8 years ago

Sounds like those gems don't measure the RAM usage on Heroku, but just let you preset an interval to restart your workers. Whacamole parses the Heroku logs to restart individual dynos when they hit their memory limit.
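For reference, wiring whacamole up is a one-time initializer; roughly (a sketch from memory of the whacamole README, so the exact field names may differ, and the values shown are illustrative):

```ruby
# config/initializers/whacamole.rb (sketch; check the whacamole README for the exact config fields)
Whacamole.create_stream do |config|
  config.app_name = "your-heroku-app-name"   # app whose logs whacamole tails
  config.api_key  = ENV["HEROKU_API_KEY"]    # used to read logs and issue dyno restarts
  config.dynos    = %w(web)                  # dyno types to watch for memory-quota errors
end
```

Because whacamole watches the platform's own log stream, it restarts a specific dyno only when Heroku itself reports that dyno over its memory limit, rather than guessing from inside the process.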

MrHubble commented 8 years ago

Agree with closing this (as it's not really an issue), but it seems puma_worker_killer can measure RAM:

If you're not running on a containerized platform you can try to detect the amount of memory you're using and only kill Puma workers when you're over that limit. It may allow you to go for longer periods of time without killing a worker, however it is more error prone than rolling restarts. On a regular basis the size of Puma and all of its forked processes will be evaluated, and if they're over the RAM threshold they will be killed. Don't worry, Puma will notice a process is missing and spawn a fresh copy with a much smaller RAM footprint ASAP. https://github.com/schneems/puma_worker_killer
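The threshold mode quoted above is driven by a RAM ceiling and a check frequency; roughly (a sketch based on the puma_worker_killer README, with illustrative values):

```ruby
# config/initializers/puma_worker_killer.rb (sketch; values are illustrative)
PumaWorkerKiller.config do |config|
  config.ram           = 512    # total RAM available to the app, in MB
  config.frequency     = 5      # seconds between memory checks
  config.percent_usage = 0.98   # kill the largest worker once usage exceeds 98% of ram
end
PumaWorkerKiller.start
```

As the quoted docs note, the catch is that measuring process RSS from inside a Heroku dyno doesn't reflect the dyno's actual container limit, which is why this mode isn't reliable there.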

arches commented 8 years ago

Yes, it can, but not on Heroku. The puma_worker_killer docs have been updated to reflect this distinction and recommend whacamole for Heroku; I'll update the whacamole docs to recommend puma_worker_killer for other platforms.


MrHubble commented 8 years ago

Great, thanks for clarifying. I guess another difference may be that whacamole restarts the whole dyno, whereas puma_worker_killer restarts an individual Puma worker? Seeing as on Heroku you should use rolling restarts with puma_worker_killer, I'm not sure this would make any difference anyway.
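The rolling-restart mode mentioned here is configured separately from the threshold mode; roughly (per the puma_worker_killer README, with an illustrative interval):

```ruby
# config/initializers/puma_worker_killer.rb (sketch; interval is illustrative)
# Restart Puma workers on a fixed schedule instead of measuring RAM.
PumaWorkerKiller.enable_rolling_restart(12 * 3600) # every 12 hours
```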

Also, I believe whacamole should not be used with Heroku's preboot feature, whereas I think preboot is recommended (maybe) with puma_worker_killer.

Thanks for sharing this gem, it's appreciated.