Closed: stefankleff closed this issue 6 years ago
Hello Stefan,
Now that we are about to release 0.4.0, I might have an interest in such an addition for 0.5.0. Forking is complex and difficult to test, but if it works, I wonder whether forking would reduce the memory footprint of the processes.
I noticed that even with memory_get_peak_usage() reporting ~25 MB of usage between jobs, New Relic reports ~1.4 GB for 10 running processes. This post suggests forking might counter this...
Do you have thoughts on that? And what other advantages would you predict...?
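For reference, this is roughly how I log what PHP itself reports between jobs (a minimal sketch). Note that the New Relic figure is per-process RSS, which also counts the PHP binary, loaded extensions and memory PHP has not returned to the OS, so it will always be higher than these numbers:

```php
<?php
// Log PHP's own view of memory usage between jobs; RSS as seen by the OS
// (and by New Relic) will be higher than either of these figures.
printf(
    "current: %.1f MB, peak: %.1f MB\n",
    memory_get_usage(true) / 1048576,
    memory_get_peak_usage(true) / 1048576
);
```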
Hi Bas,
I don't think that it provides an advantage regarding the overall memory footprint.
My use case: I'm running some intense calculations with lots of database queries in my jobs, using ZF2 and Doctrine ORM. Often there is some kind of memory leak, for example in Doctrine, and some object references are still stored somewhere after the job has finished, so the garbage collector is unable to free the memory again. This leads to unnecessarily high memory usage.
I know that the cause is a problem in a third-party lib and that memory leaks should be fixed there, but we cannot assume that every lib developer has long-running tasks in mind and knows how the GC in PHP works.
Therefore I suggest that each job execution should run in its own thread. Then it doesn't matter whether there is a memory leak or not: the thread is thrown away after the job processing is done.
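In PHP the same effect is usually achieved with a fork rather than a thread. A minimal sketch of a fork-per-job worker loop, assuming the pcntl extension is available; `$queue` and `$job->execute()` are hypothetical placeholders, not SlmQueue's actual worker API:

```php
<?php
// Sketch only: run every job in a forked child so that any leaked memory is
// reclaimed by the OS when the child exits.
while ($job = $queue->pop()) {
    $pid = pcntl_fork();

    if ($pid === -1) {
        throw new RuntimeException('Unable to fork worker process');
    }

    if ($pid === 0) {
        // Child process: execute the job, then exit so all memory is freed,
        // including objects a third-party lib still holds references to.
        $job->execute();
        exit(0);
    }

    // Parent process: wait for the child before fetching the next job.
    pcntl_waitpid($pid, $status);
}
```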
Are you aware of the max_runs setting? It is designed to halt the worker after a certain number of executions.
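A hedged sketch of how that could be wired up in the module config; the exact keys differ between SlmQueue versions (older releases took a plain worker option, the strategy-based layout below is an assumption for 0.4+), so check the docs for the version you run:

```php
<?php
// Sketch only: stop the worker after a fixed number of jobs so the process
// manager restarts it with a fresh memory space. Key names are assumptions.
return [
    'slm_queue' => [
        'worker_strategies' => [
            'default' => [
                'SlmQueue\Strategy\MaxRunsStrategy' => ['max_runs' => 100],
            ],
        ],
    ],
];
```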
That said, since the process stops and supervisor (or whatever process manager you use) needs to notice this and restart the complete stack, this can take a relatively long time. I don't know how that compares to a fork, but if forking is significantly faster, it would be a useful addition.
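For comparison, the restart path via supervisord looks roughly like this (a sketch; the command line is a placeholder for whatever entry point starts your SlmQueue worker):

```ini
; Sketch of a supervisord entry that restarts workers whenever they exit,
; e.g. after hitting max_runs. Replace the command with your actual worker
; entry point; numprocs matches the 10 processes mentioned above.
[program:slmqueue-worker]
command=php /path/to/your/worker-entry-point.php
process_name=%(program_name)s_%(process_num)02d
numprocs=10
autostart=true
autorestart=true
startsecs=0
```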
I'll close this one as it is more than two years old. Feel free to open a new issue if the idea is still relevant and/or you are interested in implementing it! :)
Some time ago I added forking support: https://github.com/goalio/SlmQueue/tree/feature/fork
Regarding this, I have two questions: