Closed nicolasfranck closed 1 year ago
In your PSGI app you could record the time when it boots, check the current uptime in the request handler, and then enable the PSGI harakiri mode so that Starman spawns a new process to replace the worker. For example:
```perl
my $boot_time = time;
my $app = sub {
    my $env = shift;
    if (time - $boot_time > 10) {
        $env->{'psgix.harakiri.commit'} = 1;
    }
    return [200, ["Content-Type", "text/plain"], ["Hi $$ - $boot_time\n"]];
};
```
This will kill the worker every 10 seconds. Note that if you run Starman with `--preload-app`, then `$boot_time` will always be the same (because it's compiled in the parent before forking), so you need to take that into account by keying on the worker's PID, like:
```perl
my %boot;
my $app = sub {
    $boot{$$} ||= time;
    my $env = shift;
    if (time - $boot{$$} > 10) {
        $env->{'psgix.harakiri.commit'} = 1;
    }
    return [200, ["Content-Type", "text/plain"], ["Hi $$ - $boot{$$}\n"]];
};
```
Mm, but this depends on the worker receiving a request. From your answer, I suspect that the worker restart is actually triggered by the worker itself, and not by the Starman master, right?
The thing is: traffic changes. One moment there is a lot, another moment far less. That is why I thought to let the Starman master kill processes based on time.
Anyway, thanks for the information! ;-)
You could set `min_servers`, `min_spare_servers` and `max_spare_servers`, and that way Starman will dynamically reduce the number of workers when it's not busy. By default these numbers are set to the same as the workers count (minus one for the spare servers), so the worker count will always be constant, but Starman accepts these parameters so you can tune the behavior closer to something like Apache.
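As a sketch, such an invocation might look like the following. The flag values are arbitrary and `app.psgi` is a placeholder for the real application; this assumes your Starman version passes these options through to the underlying prefork server:

```shell
# Let the pool shrink toward 2 idle workers during quiet periods
# and grow back under load. All numbers are illustrative.
starman --min-servers 2 --max-servers 20 \
        --min-spare-servers 1 --max-spare-servers 4 \
        app.psgi
```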
Right now Starman workers are killed after a preconfigured number of requests. This keeps the memory footprint healthy, since in Perl, due to memory leaks, memory usage tends to grow a lot. Well, at least in projects I have worked with.
Unfortunately, during "quiet" periods, when the webserver does not receive many requests, this condition hardly ever triggers, and we have to restart Starman manually.
Wouldn't it be handy to implement a worker timeout as a second condition? At least after the worker has served at least one request (for otherwise that restart loop would go on forever).
Unless there is a better option available in Starman (or in the parent module)? The only thing I can think of is lowering the option `--min-servers` (which defaults to `--workers`) so that the workers in less-visited environments have more to do, and so hit the max-requests restart condition. Thanks in advance!