amphp / process

An async process dispatcher for Amp.

v2 not working on FPM while v1 does #64

Closed: Dando-Real-ITA closed this issue 1 year ago

Dando-Real-ITA commented 2 years ago

Hi, I am testing Parallel v2 with fibers on PHP 8.1 FPM with a simple task service, but it no longer works because the underlying Process requires pcntl, which is not available on FPM. The same code with promises works with Parallel/Process v1.

trowski commented 2 years ago

Hi, thanks for reporting this!

Does it work fine again if these lines are removed from PosixRunner? Not sure why that check was added, as pcntl isn't strictly required.

Dando-Real-ITA commented 2 years ago

I did try that yesterday on a fork after posting the issue, but it is not enough: the process is not started properly, and in htop I see that an sh command with no parameters is started for a moment. On v1, when the parallel process is started, I see a long command starting with sh -c.

pcntl seems related to signals, and I see in the code that it is connected to the new Revolt loop. I hope it is not used for internal parent-child communication and that there is a way to make it work on FPM.

trowski commented 2 years ago

I'm not able to reproduce this locally. FPM + amphp/process + amphp/parallel is working as I'd expect after removing the check for pcntl I mentioned above.

Is there some more information you can give me? How are you building the command being passed to Process, or how are you using it with amphp/parallel?

Dando-Real-ITA commented 2 years ago

I am trying to use a Task on the worker pool, but it is possible that I am not migrating correctly from call() and enqueue() to the new async() and submit().

My goal is to execute three getText() methods in parallel by delegating the data load to a worker, then decoding and returning the data when it is ready. In v1, I have a generator that yields after enqueue(), then completes the method and returns the decoded value once the data is ready.

I am starting to see some results with

$execution = Worker\submit(new CacheTask("getObject", $data));
list($detailsclobEnc, $status) = $execution->getResult()->await();

executed inside a function called with

$futures[] = async($this->getText(...), "category", "privacypolicy", $c->cmsCd);
Future\await($futures);

But it does not seem right to use the Execution class to extract the Future to await, and after execution the processes created are never killed (unless they are long-lived processes that will receive further jobs? It seems that if the job is short the process is killed, but if it is longer it is not, and I can reproduce this consistently by adding a sleep(1) to the task's run()).

The v1 code

list($detailsclobEnc, $status) = yield Worker\enqueue(new CacheTask("getObject", $data));

Called from

$promises["privacypolicy"] = call($this->getText(...), "category", "privacypolicy", $c->cmsCd);
$responses = Promise\wait(Promise\all($promises));

works, and the process ends after the return.
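
A minimal sketch of the full v2 shape of this pattern, consolidating the snippets above. CacheTask, getText() and its arguments are taken from this thread; the extra array keys, the way the task data is packed, the standalone-function form, and $cmsCd standing in for $c->cmsCd are assumptions, and getResult() is the beta-era Execution call used above:

use Amp\Future;
use Amp\Parallel\Worker;
use function Amp\async;

function getText(string $category, string $key, string $cmsCd): array
{
    // v2 replacement for `yield Worker\enqueue(...)`: submit() returns an Execution.
    $execution = Worker\submit(new CacheTask("getObject", [$category, $key, $cmsCd]));

    // Await the worker result; the decoding step described above would go here.
    [$clobEnc, $status] = $execution->getResult()->await();

    return [$clobEnc, $status];
}

// v2 replacement for call() + Promise\all() + Promise\wait():
$futures = [];
$futures["privacypolicy"] = async(getText(...), "category", "privacypolicy", $cmsCd);
$futures["terms"]         = async(getText(...), "category", "terms", $cmsCd);
$futures["cookies"]       = async(getText(...), "category", "cookies", $cmsCd);

$responses = Future\await($futures); // suspends the current fiber until all three complete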

Dando-Real-ITA commented 2 years ago

I have some other info: if I get the $pool and run the following after the task execution:

$pool = \Amp\Parallel\Worker\workerPool();
$pool->kill();

I get zombie processes

But if I change to:

$pool->shutdown();

There are no zombies.

Dando-Real-ITA commented 2 years ago

For now I managed to avoid zombies with

register_shutdown_function(\Amp\Parallel\Worker\workerPool()->shutdown(...));

I tried to await the workers in the parallel pool destructor, but that was not working.

trowski commented 2 years ago

Can you please update to v2.0.0-beta.4 and check if this helps with the zombie processes? This will also affect worker pools, so hopefully the explicit call to WorkerPool::shutdown() will no longer be required. Note that ext-pcntl is required to properly collect child processes after they exit, so this may still not work on FPM, but the zombies should be removed once the FPM process exits.
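
(For context, and not amphp's actual code: ext-pcntl matters here because collecting exited children generally needs a SIGCHLD handler or a waitpid() call in the parent, roughly like the sketch below; without it, finished children remain zombies until the parent itself exits, which matches the FPM behaviour just described.)

// Illustration only: how a parent process typically collects exited children.
pcntl_async_signals(true);

pcntl_signal(SIGCHLD, function (): void {
    // Reap every child that has already exited, without blocking.
    while (pcntl_waitpid(-1, $status, WNOHANG) > 0) {
        // Child collected; its zombie entry leaves the process table.
    }
});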

But it does not seem right to use the Execution class to extract the Future to await, and after execution the processes created are never killed

Worker pool processes are not stopped until the pool is either shut down or the parent process exits. This is so additional tasks can be sent to worker processes without needing to start a new process for each task. Workers should only be killed if a fatal error occurs in the worker, such as an out-of-memory error.
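
A rough sketch of that lifecycle, using the global pool and the CacheTask class from this thread ($dataA / $dataB are placeholders, and getResult() is the beta-era call used earlier):

use Amp\Parallel\Worker;

// Both tasks go to the same pool; its workers are reused across tasks
// rather than being restarted for each one.
$first  = Worker\submit(new CacheTask("getObject", $dataA));
$second = Worker\submit(new CacheTask("getObject", $dataB));

$resultA = $first->getResult()->await();
$resultB = $second->getResult()->await();

// The worker processes keep running, ready for further tasks, until the
// pool is shut down or the parent process exits.
Worker\workerPool()->shutdown();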

Dando-Real-ITA commented 2 years ago

I updated to the latest commit, 8574b14.

So my starting test case uses these two calls, which still work with the new code:

// Ensure worker pool shutdown
register_shutdown_function(\Amp\Parallel\Worker\workerPool()->shutdown(...));
// Ensure loop shutdown
register_shutdown_function(\Revolt\EventLoop::run(...));

This leaves no zombie processes

If I leave only the

register_shutdown_function(\Revolt\EventLoop::run(...));

I get non-zombie worker processes that block php-fpm from receiving requests (tested with pm.max_children = 3; after 3 requests I get blocked).

If I remove both lines, I get the zombie processes and no worker is ever reused. That is probably because of how php-fpm works: it handles each request top to bottom, and every time it recreates the pool and the workers.

So it seems to me that keeping both lines is best, to ensure there are no zombie processes and that the loop is concluded even in bad exception cases.
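
For reference, the bootstrap that ended up working here, with the observed effect of each hook noted (the same two calls as above, only annotated):

use Amp\Parallel\Worker;
use Revolt\EventLoop;

// Without this line, the pooled worker processes outlive the request and
// eventually block php-fpm from serving new requests once pm.max_children
// is reached (see the test above).
register_shutdown_function(Worker\workerPool()->shutdown(...));

// This runs the event loop to completion at the end of the request, so the
// loop is concluded even when the request ends with an uncaught exception.
register_shutdown_function(EventLoop::run(...));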