Open bashkarev opened 4 years ago
Hello,

> idle_timeout and send_timeout do not limit the request processing time. Use the timeout option in the application's limits group to limit client wait time (see Request Limits).

Yes.

Are you planning to add signal controls for:
- a frozen process
- client disconnection

Hopefully soon. Thank you!
@mar0x, I tried to find out what exactly "Currently Unit tries only to stop applications gracefully" means, so I can make sure I change the application so that it will actually stop. This would be great to have until you guys are able to implement the kill.
Thanks!
@teodorescuserban Hi, I'll work on a more detailed explanation of what exactly happens there.
@artemkonev any news on this? Thank you!
Hi @teodorescuserban , I'm about to post an update in a week or two.
@teodorescuserban hi! It seems the answer is, unfortunately, 'not much.' I added what little detail there is here:
@artemkonev I am sorry to hear there is no way to notify the worker that Unit is cancelling the request. I guess the worker will have to manage its subsequent processes and make sure it doesn't go over the Unit timeout threshold. Hopefully in the future Unit will send some signal to the worker, and in that case it will be up to the worker to react to it and kill whatever children it has. Thank you for looking into this.
Hi, is there any standard solution for that problem?
I understand that it seems obvious that you should just "try to avoid infinite loops for now", but imagine that the problem is not infinite loops, but SQL queries that hang indefinitely.
For example, our configuration:

```json
"admin": {
    "type": "php",
    "root": "/var/www/app/backend/web/",
    "script": "index.php",
    "processes": 8,
    "limits": {
        "requests": 10000,
        "timeout": 30
    }
}
```
In our case, Unit returns a 503 status code to the client after 30 seconds, but the hanging process keeps running. It would be perfect to somehow be able to force-kill such hanging processes.
Set a max_execution_time in your php.ini .
@jeffdafoe We have tried; it didn't help - the process still executes past the duration specified in max_execution_time.
Did it work in your case?
@malsatin
From here:

> Note: The set_time_limit() function and the configuration directive max_execution_time only affect the execution time of the script itself. Any time spent on activity that happens outside the execution of the script such as system calls using system(), stream operations, database queries, etc. is not included when determining the maximum time that the script has been running. This is not true on Windows where the measured time is real.

So it looks like it won't abort your database query...
If you're using MySQL (I guess this also applies to MariaDB, and other DBs may have similar features), then it seems you can do something like:

```sql
SELECT
    /*+ MAX_EXECUTION_TIME(30000) */
    *
FROM table;
```

The timeout value is in milliseconds.
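As a possible alternative to hinting each query (an assumption worth verifying for your MySQL version; the variable exists as of MySQL 5.7.8 and applies to read-only SELECT statements), the same limit can be set once per session:

```sql
-- Cap every SELECT on this connection at 30 seconds,
-- instead of adding the optimizer hint to each statement:
SET SESSION max_execution_time = 30000;  -- milliseconds
```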
Thanks, @ac000, great point!
We will check out that feature.
The solution for now, as far as I was able to help implement on our side, is for the backend script (in our case a Python API) to have its own timeout and, if it is reached, make sure it does the cleanup (kill the child processes, abort connections to the database, etc.). This is not an easy task and depends on what you have in the backend.

Unfortunately, because we are trying to have a decent timeout (for the heavier parts of the API), we would have to wait longer to do the cleanup for every failed closed connection made from the client to nginx Unit. This at least seems to make the backend script vulnerable to DoS when you make a lot of requests and close them immediately: while nginx will cut the connection off from nginx Unit immediately, the backend script will just keep running thousands of database operations.

Hopefully this feature is implemented at some point. Until then... ban per IP/subnet/ASN :)
I guess what I am trying to say is that it would be great if nginx Unit would send some signal (e.g. HUP or USR2) to the backend script and let it figure out what needs to be done to stop. That should make any script cleanup solution work timely and precisely.
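The self-timeout approach described above can be sketched roughly as follows. This is a minimal illustration for a Unix Python backend, not Unit's API: the names `handle_request` and `RequestTimeout` are made up for the example, and the cleanup step is a placeholder for whatever your backend actually needs (killing children, aborting DB queries, etc.).

```python
import signal


class RequestTimeout(Exception):
    """Raised when the worker's own deadline expires."""


def _on_alarm(signum, frame):
    raise RequestTimeout()


def handle_request(work, timeout_s):
    """Run `work()` with a watchdog shorter than Unit's `limits.timeout`.

    If `work` runs past `timeout_s`, SIGALRM fires and we get a chance
    to clean up before Unit gives up on the request.
    """
    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(timeout_s)
    try:
        return work()
    except RequestTimeout:
        # Placeholder cleanup: abort DB queries, kill child processes, etc.
        return "aborted"
    finally:
        signal.alarm(0)                      # cancel the pending alarm
        signal.signal(signal.SIGALRM, old_handler)
```

Note that `signal.alarm` only works in the main thread on Unix, so this sketch does not carry over directly to threaded or Windows deployments.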
> @jeffdafoe We have tried; it didn't help - the process still executes past the duration specified in max_execution_time.
> Did it work in your case?

It stops the script but the query continues to execute on the database.

> @jeffdafoe We have tried; it didn't help - the process still executes past the duration specified in max_execution_time. Did it work in your case?
>
> It stops the script but the query continues to execute on the database.

You need to make the script abort whatever queries before shutting down and you should be okay.
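One way to do that abort-on-shutdown cleanup, shown here for MySQL (other DBMSs have equivalents), is to record the worker connection's id before the long query starts and issue KILL QUERY from a second, short-lived connection when the script shuts down. The id 12345 below is purely illustrative:

```sql
-- On the worker connection, before running the long query:
SELECT CONNECTION_ID();   -- remember this id, e.g. 12345

-- From a separate cleanup connection, on timeout/shutdown:
KILL QUERY 12345;         -- aborts the running statement, keeps the session
```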
In my experiment this ini parameter didn't stop the script; it kept executing until the query was aborted in the DBMS.
It depends on whether your PHP is compiled with or without threads support, apparently. See https://github.com/php/php-src/issues/14769 .
The PHP process is not terminated after the timeout.
conf.json
index.php