Closed: Slind14 closed this issue 3 years ago.
Hi, this is probably due to the type of resolver libcurl was built with, i.e. the threaded resolver.
Is there anything I could do to reduce this? I had to raise the open-files limit immensely.
I guess you could build libcurl with the threaded resolver disabled or, presumably better, with the async resolver (c-ares) instead.
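For reference, a build along those lines would typically look like the sketch below (configure flags are libcurl's; the TLS backend choice and install prefix are assumptions, and c-ares itself must already be installed):

```shell
# Sketch: building libcurl against the c-ares async resolver.
# --enable-ares replaces the default threaded resolver with c-ares.
./configure --enable-ares --with-openssl
make
make install
```

After installing, PHP's ext/curl needs to be (re)built or relinked against this libcurl to pick up the change.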
So kinda like this? https://stackoverflow.com/a/41986646/2693017 I have never done anything like that before.
How did it go?
I haven't found the time to do it yet.
Thinking about it again, the threaded resolver should already have been in use with ext/curl, so it seems unlikely that the threaded resolver is the actual problem.
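One way to check that assumption is to inspect what the installed libcurl was actually built with (a diagnostic sketch, assuming `curl` and `php` are on PATH and ext/curl links the same libcurl):

```shell
# The first line of `curl -V` lists linked libraries; "c-ares/x.y.z" there
# means the c-ares resolver. "AsyncDNS" under Features is set for BOTH the
# c-ares and the threaded resolver, so it alone does not distinguish them.
curl -V

# Shows which libcurl version PHP's ext/curl is linked against.
php -i | grep -i curl
```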
Where's the data from the graphs coming from? What's the big drop in the second one?
The data is from https://amplify.nginx.com/. The drop in the second one is from switching from php curl to pecl-http (the php curl implementation didn't reuse connections and didn't close them either).
Hm, sorry, still doesn't make a lot of sense to me...
max_child: the number of times the process limit has been reached.
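For context, that metric corresponds to php-fpm hitting its pool process cap; if the higher process count is expected after the migration, the usual knob is the pool configuration (directive names below are php-fpm's; the values are illustrative placeholders, not recommendations):

```ini
; Sketch of a php-fpm pool tune for a dynamic pool.
pm = dynamic
pm.max_children = 50      ; hard cap; max_child counts how often this was hit
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
```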
Okay, so the drop was just due to a restart.
Don't hesitate to reopen when there are new insights.
Hi, we migrated from php's curl implementation to pecl-http in order to be able to reuse existing connections and avoid hitting limitations there.
With the migration we experienced a fivefold increase in current processes. Are we missing something?
(The migration happened on the 11th.) Requests stayed (more or less) the same:
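The rationale for the migration described above, reusing one connection for many requests instead of opening a fresh one per request, can be sketched like this (a Python stand-in for the PHP code; host and paths are placeholders):

```python
# Minimal sketch: one TCP connection reused for a batch of requests,
# instead of a new connection per request (what plain php curl was doing).
import http.client

def fetch_many(host, paths):
    """GET each path over a single keep-alive connection; return the bodies."""
    conn = http.client.HTTPConnection(host)  # one socket for the whole batch
    bodies = []
    for path in paths:
        conn.request("GET", path)
        resp = conn.getresponse()
        bodies.append(resp.read())  # drain the response before reusing the socket
    conn.close()
    return bodies
```

With keep-alive the server sees one long-lived connection per batch rather than one short-lived connection per request, which is the behavior the switch to pecl-http was meant to achieve.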