Appla opened this issue 1 year ago
I have been doing some debugging and this is really an issue. Basically, fastcgi_finish_request
is pretty much useless when used with keepalive on busy servers. It returns the response sooner, but it introduces a start delay for the next request because nginx assumes it can send another request immediately afterwards. As you show here, it can also deadlock if the script makes a request back to itself, for the same reason: nginx thinks the connection is free and reuses it.
I have been thinking about possible solutions and I can't see a way to address this without closing the connection. It would either require some sort of forking, which is most likely more expensive than closing the connection, or it would require ZTS, which is unusual for FPM and probably not a good idea either due to the complexity.
In terms of closing the connection, I'm not sure it should be done unconditionally in the function, since the right behaviour depends on the server / FPM configuration. Maybe we should just add an option that controls whether the connection is always closed after fastcgi_finish_request. It might even make sense to enable it by default, because the current behaviour is not what users expect: I think no one expects fastcgi_finish_request to introduce a delay for the next request on busy servers. We should also update the docs.
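As an operational workaround today, keepalive to the FastCGI upstream can simply be left disabled on the nginx side, which forces a fresh connection per request at some performance cost. A minimal sketch of the relevant directives; the upstream name and socket path are examples, not taken from the report:

```nginx
# Hypothetical upstream; the socket path is an example.
upstream php_fpm {
    server unix:/run/php/php-fpm.sock;
    # keepalive 8;  # the problematic setting: nginx caches and reuses FastCGI connections
}

server {
    location ~ \.php$ {
        fastcgi_pass php_fpm;
        include fastcgi_params;
        # fastcgi_keep_conn must be "on" for keepalive to apply at all;
        # leaving it at the default ("off") means the connection is closed
        # after each request, avoiding the delay/deadlock described above.
        # fastcgi_keep_conn on;
    }
}
```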
The issue arises when keepalive is used alongside fastcgi_finish_request.
To address this, I propose adding a new optional parameter to fastcgi_finish_request that defaults to closing the connection.
This approach not only resolves the problem but also gives users the flexibility to modify the default behavior if needed. Introducing a new configuration option to control this default behavior seems reasonable.
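Under that proposal, usage could look like the sketch below. The parameter and the INI option named here are hypothetical illustrations of the proposal, not an existing API:

```php
<?php
// Hypothetical signature: fastcgi_finish_request(bool $close_connection = true)
// The default could in turn come from a hypothetical FPM/INI option,
// e.g. fastcgi.finish_request_close_connection = 1.

echo "response body";

// Flush the response to the client AND close the FastCGI connection,
// so nginx cannot queue the next request onto this busy worker.
fastcgi_finish_request(true);

do_slow_background_work(); // placeholder for post-response processing
```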
The problem is that this requires code changes for something that is more of an operational concern. To me it makes sense to give the administrator control over this, which is essentially a configuration option that affects every fastcgi_finish_request call.
Sure, I have just introduced a new option that serves as the default value for the fastcgi_finish_request parameter controlling connection closure.
Description
The following code:
with FPM settings:
and NGINX settings:
Resulted in this log:
But I expected this output instead:
This is easy to reproduce following the description above; IMHO the log clearly shows that PHP's current behaviour is wrong. The reason, I think, is that after proc-X calls
fastcgi_finish_request
, the connection is marked reusable by the proxy; if the same process soon sends a request to the same host, the proxy may select that same connection, and a deadlock happens. This can also be observed with GDB/tcpdump.
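A minimal reproduction sketch of the self-request deadlock, assuming an FPM pool with a single worker (pm.max_children = 1), nginx keepalive to the upstream, and that the script is reachable at the example URL (hostname and paths are assumptions):

```php
<?php
// self.php — served by a one-worker FPM pool behind nginx with
// keepalive + fastcgi_keep_conn on for the FastCGI upstream.
if (!isset($_GET['inner'])) {
    echo "outer";
    fastcgi_finish_request(); // nginx now considers this connection reusable

    // This request may be routed over the same (still-busy) connection,
    // back to the same worker, which is still running this script: deadlock.
    $ch = curl_init('http://localhost/self.php?inner=1');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    curl_close($ch);
} else {
    echo "inner";
}
```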
So I made this: https://github.com/php/php-src/pull/10273
PHP Version
7.4-8.2
Operating System
Ubuntu 20.04