nginx / unit

NGINX Unit - universal web app server - a lightweight and versatile open source server that simplifies the application stack by natively executing application code across eight different programming language runtimes.
https://unit.nginx.org
Apache License 2.0

PHP process not terminated after timeout #408

Open bashkarev opened 4 years ago

bashkarev commented 4 years ago

The PHP process is not terminated after the timeout.

conf.json

{
  "listeners": {
    "*:9000": {
      "application": "index"
    }
  },
  "applications": {
    "index": {
      "type": "php",
      "user": "www-data",
      "group": "www-data",
      "processes": {
        "max": 10,
        "spare": 1
      },
      "root": "/var/www/html/public",
      "index": "index.php"
    }
  },
  "settings": {
    "http": {
      "idle_timeout": 10,
      "send_timeout": 10
    }
  }
}

index.php

<?php

$stderr = fopen('php://stderr', 'wb');
$i = 0;
while (true) {
    fwrite($stderr, $i . '> connection_status: ' . connection_status() . PHP_EOL);
    sleep(1);
    ++$i;
}

mar0x commented 4 years ago

Hello,

  1. According to the Settings documentation, idle_timeout and send_timeout do not limit the request processing time. Use the timeout option in the application's limits group to limit client wait time (see Request Limits).
  2. Currently, Unit only tries to stop applications gracefully and does not kill them with a signal if they hang. Try to avoid infinite loops for now.
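
As a sketch of point 1, the per-application timeout goes in a limits object inside the application definition (the value here is illustrative):

```json
"applications": {
    "index": {
        "type": "php",
        "root": "/var/www/html/public",
        "index": "index.php",
        "limits": {
            "timeout": 30
        }
    }
}
```

Note that, per the Request Limits documentation, this caps how long a client waits for a response; it does not kill the worker process itself, which is the subject of this issue.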
bashkarev commented 4 years ago

Are you planning to add signal controls for:

  • frozen process
  • client disconnection

mar0x commented 4 years ago

Yes

teodorescuserban commented 1 year ago

Are you planning to add signal controls for:

  • frozen process
  • client disconnection

Hopefully soon. Thank you!

teodorescuserban commented 1 year ago

@mar0x, I tried to find out what exactly "Currently Unit tries only to stop applications gracefully" means, so I can change the application so it will actually stop. This would be great to have until you guys are able to implement the kill.

Thanks!

ghost commented 1 year ago

@teodorescuserban Hi, I'll work on a more detailed explanation of what exactly happens there.

teodorescuserban commented 1 year ago

@teodorescuserban Hi, I'll work on a more detailed explanation of what exactly happens there.

@artemkonev any news on this? Thank you!

ghost commented 1 year ago

Hi @teodorescuserban , I'm about to post an update in a week or two.

ghost commented 1 year ago

@teodorescuserban hi! Seems the answer is, unfortunately, 'not much.' I added what little details there are here:

https://unit.nginx.org/configuration/#request-limits

teodorescuserban commented 1 year ago

@artemkonev I am sorry to hear there is no way to notify the worker that Unit is cancelling the request. I guess the worker will have to manage its subsequent processes and make sure it doesn't go over the Unit timeout threshold. Hopefully, in the future, Unit will send some signal to the worker; it will then be up to the worker to react to it and kill whatever children it has. Thank you for looking into this.

malsatin commented 1 month ago

Hi, is there any standard solution for this problem?

I understand that it seems obvious that you should just "try to avoid infinite loops for now", but imagine that the problem is not an infinite loop but an infinitely hanging SQL query.

For example, our configuration:

"admin": {
    "type": "php",
    "root": "/var/www/app/backend/web/",
    "script": "index.php",
    "processes": 8,
    "limits": {
        "requests": 10000,
        "timeout": 30
    }
}

In our case, it would be perfect to somehow be able to force-kill such hanging processes.

jeffdafoe commented 1 month ago

Set a max_execution_time in your php.ini .
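
For reference, that directive lives in php.ini (the value is in seconds; 0 disables the limit), and could look like:

```ini
; php.ini — abort scripts that exceed 30 seconds of script execution time
max_execution_time = 30
```

As noted further down, on non-Windows platforms time spent blocked outside the script (e.g. in database queries) does not count toward this limit.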

malsatin commented 1 month ago

@jeffdafoe We have tried; it didn't help. The process still executes after the duration specified in max_execution_time.

Did it work in your case?

ac000 commented 1 month ago

@malsatin

From here

Note: The set_time_limit() function and the configuration directive max_execution_time only affect the execution time of the script itself. Any time spent on activity that happens outside the execution of the script such as system calls using system(), stream operations, database queries, etc. is not included when determining the maximum time that the script has been running. This is not true on Windows where the measured time is real.

So it looks like it won't abort your database query...

If you're using MySQL (I guess this also applies to MariaDB, and other DBs may have something similar), then it seems you can do something like

SELECT 
/*+ MAX_EXECUTION_TIME(30000) */
*
FROM table;

The timeout value is in milliseconds.
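
If you'd rather not touch every query, MySQL (5.7.8+) also exposes this as a system variable, which applies to subsequent read-only SELECT statements on the connection:

```sql
-- cap SELECTs on this connection at 30 seconds (value in milliseconds)
SET SESSION max_execution_time = 30000;
```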

malsatin commented 1 month ago

Thanks, @ac000, great point!

We will check out that feature.

teodorescuserban commented 1 month ago

The solution for now, as far as I was able to help implement on our side, is for the backend script (in our case a Python API) to have its own timeout and, when it is reached, make sure it does the cleanup (kill the child processes, abort connections to the database, etc.). This is not an easy task and depends on what you have in the backend.

Unfortunately, because we are trying to have a decent timeout (for the heavier parts of the API), we have to wait longer to do the cleanup for every failed closed connection made from the client to NGINX Unit. This at least seems to make the backend script vulnerable to DoS when you make a lot of requests and close them immediately: while nginx will cut the connection off from NGINX Unit immediately, the backend script will just keep running thousands of database operations. Hopefully this feature is implemented at some point. Until then... ban per IP/subnet/ASN :)

teodorescuserban commented 1 month ago

I guess what I am trying to say is that it would be great if NGINX Unit would send some signal (e.g. HUP or USR2) to the backend script and let it figure out what needs to be done to stop. That would make any script cleanup solution timely and precise.
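
For what it's worth, handling such a signal on the backend side would be straightforward. A minimal sketch in Python (SIGUSR2 as the cancellation signal is purely my assumption here; Unit does not send anything like this today):

```python
import os
import signal

cancelled = False

def on_cancel(signum, frame):
    # Hypothetical server-sent signal: flag the request as cancelled so the
    # main loop can abort queries and kill any child processes it spawned.
    global cancelled
    cancelled = True

signal.signal(signal.SIGUSR2, on_cancel)

# Simulate the server signalling this worker process.
os.kill(os.getpid(), signal.SIGUSR2)

# A real worker would check this flag between units of work.
print(cancelled)  # True
```

The point of the flag (rather than raising from the handler) is that the worker gets to decide where it is safe to stop and what cleanup to run.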

jeffdafoe commented 1 month ago

@jeffdafoe We have tried; it didn't help. The process still executes after the duration specified in max_execution_time.

Did it work in your case?

It stops the script but the query continues to execute on the database.

teodorescuserban commented 1 month ago

@jeffdafoe We have tried; it didn't help. The process still executes after the duration specified in max_execution_time. Did it work in your case?

It stops the script but the query continues to execute on the database.

You need to make the script abort whatever queries it has running before shutting down, and you should be okay.

malsatin commented 1 month ago

In my experiment this ini parameter didn't stop the script; it kept executing until the query was aborted in the DBMS.

jeffdafoe commented 2 weeks ago

It depends on whether your PHP is compiled with or without thread support, apparently. See https://github.com/php/php-src/issues/14769 .