Closed. CaryGuo closed this issue 2 years ago.
It's not clear to me what your question is, but here's some possibly relevant information.
PHPC-1645, which was released in version 1.10.0 of the extension, introduced a `disableClientPersistence` option in the third `$driverOptions` argument to `MongoDB\Driver\Manager::__construct()`. Driver options can also be passed to `MongoDB\Client::__construct()` if you're using the library instead of the extension directly.
When enabled, this option prevents the extension from persisting the internal libmongoc client (and its sockets), which means any resources will be freed when PHP frees the Manager instance. This allows you to override the default persistence behavior discussed in Connection handling and persistence.
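As a minimal sketch (the URI is a placeholder, and `ext-mongodb` 1.10.0 or newer is assumed), enabling this option looks like:

```php
<?php
// Sketch: passing disableClientPersistence via the third $driverOptions
// argument to the Manager constructor. Requires ext-mongodb >= 1.10.0.
$manager = new MongoDB\Driver\Manager(
    'mongodb://127.0.0.1:27017', // placeholder URI
    [],                                   // $uriOptions
    ['disableClientPersistence' => true]  // $driverOptions
);
// When PHP frees this Manager, the internal libmongoc client (and its
// sockets) are freed with it instead of being persisted across requests.
```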
MongoDB 5.0 and the corresponding driver releases introduced support for connecting to `mongos` instances behind a layer 4 load balancer. This functionality was implemented in PHPC-1752 and released in version 1.11.0 of the `mongodb` extension.
This feature allows the driver to bypass server discovery and monitoring by connecting to a single load balancer instead of to the MongoDB cluster itself (e.g. an entire replica set or several `mongos` hosts).
There is little, if any, public documentation for this feature, as it is primarily used by Atlas Serverless; however, it is possible to manually configure a load balancer in front of your own sharded cluster, which is what drivers do for their own CI testing. The implementation (as it concerns drivers) is discussed in some detail in the Load Balancer specification, and you may be able to glean information from the following resources from our CI configuration:
- The `test-loadBalanced` task in our Evergreen CI configuration. Note that a task is basically a series of shell commands, which are either inlined within the configuration file or contained in separate scripts located in either the driver's `.evergreen/` directory or the similarly named directory in mongodb-labs/drivers-evergreen-tools.
- The `run-load-balancer.sh` script, which is used to start a layer 4 load balancer. We use HAProxy for internal testing.
- The `run-orchestration.sh` script, which respects a `LOAD_BALANCED` option and selects a particular mongo-orchestration configuration (e.g. `configs/sharded_clusters/basic-load-balancer.json`).

For added context, mongo-orchestration is just a tool we use to start MongoDB clusters from pre-written configuration files. It's not something you'd use in production, but by examining the config files you can infer what options would be necessary to configure your own cluster (e.g. the `loadBalancerPort` server parameter on the `mongos` hosts).
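For illustration, a layer 4 load balancer in front of two `mongos` hosts could be sketched with an HAProxy config along these lines (host names, ports, and backend names are assumptions, not taken from our CI scripts):

```
# Hypothetical HAProxy sketch: TCP (layer 4) load balancing in front of
# two mongos hosts. All addresses below are placeholders.
frontend mongodb_lb
    mode tcp
    bind *:27017
    default_backend mongos_pool

backend mongos_pool
    mode tcp
    server mongos1 10.0.0.1:27017
    server mongos2 10.0.0.2:27017
```

The essential point is `mode tcp`: the balancer forwards raw connections and does not inspect the MongoDB wire protocol.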
Prior to introducing support for layer 4 load balancers, mongos was the most common way to limit connections in an application. Typically, each app server would have one mongos on it, and that would front all connection activity to the backing mongod hosts. If you are running 10,000 FPM workers on a single app server, I would think that a single mongos host on the same server could still handle those connections, barring any memory or file descriptor limitations. That said, deployment analysis is well beyond the scope of a PHP bug report, so if that's necessary I would suggest reaching out for professional support/consultation services.
Our web service has a large cluster with about 10,000 FPM workers. We are now trying to introduce MongoDB for a low-frequency feature. Because the feature is low-frequency and simple, the MongoDB cluster is small. But once 10,000 connections were established, MongoDB stopped working entirely.
We could build a connection-pooling service or split this small feature out, but since it's just a low-frequency feature, that would cost us a great deal.