jameshartig opened this issue 5 years ago
After digging some more, it looks like this might be intentional:

```go
// Don't ever hit the pool limit for syncing
config := cluster.dialInfo.Copy()
config.PoolLimit = 0
```

This is unfortunate because it doubles the number of connections we think we're making to the primary. Can an option be added to make the pool limit a hard limit, so that the sync fails if the pool is full/used?
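For illustration, the caller side of such an option might look something like the sketch below. The `PoolLimitIsHard` field is hypothetical and does not exist in mgo today (so it's commented out); the other `DialInfo` fields are real:

```go
package main

import (
	"time"

	"github.com/globalsign/mgo"
)

func main() {
	info := &mgo.DialInfo{
		Addrs:     []string{"127.0.0.1:27017"},
		Timeout:   10 * time.Second,
		PoolLimit: 1, // real field, but the internal sync path copies dialInfo and zeroes it
		// PoolLimitIsHard: true, // hypothetical: respect PoolLimit for syncing too,
		//                        // failing the sync when the pool is already full
	}
	session, err := mgo.DialWithInfo(info)
	if err != nil {
		panic(err)
	}
	defer session.Close()
}
```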
Hi @fastest963,
Thanks for taking the time to report this issue. We are happy to review pull requests, so feel free to send one with the changes to address the problem with the pool limit.
Best regards, Oscar
Hi @fastest963
Is this actually creating the connections, or could it be related to https://github.com/globalsign/mgo/pull/329? It might be worth trying with @KJTsanaktsidis's branch.
Dom
I've already submitted a bugfix branch to solve this problem; please review #373
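For anyone skimming, the semantics being requested can be sketched generically as a hard-capped pool that fails acquisition rather than over-allocating. This is only an illustration of the idea, not the code in #373:

```go
package main

import (
	"errors"
	"fmt"
)

var errPoolLimit = errors.New("per-server connection limit reached")

// hardPool hands out at most `limit` slots; Acquire fails once the cap is
// reached instead of silently growing past it.
type hardPool struct {
	slots chan struct{}
}

func newHardPool(limit int) *hardPool {
	return &hardPool{slots: make(chan struct{}, limit)}
}

func (p *hardPool) Acquire() error {
	select {
	case p.slots <- struct{}{}: // take a slot if one is free
		return nil
	default: // pool exhausted: fail instead of exceeding the limit
		return errPoolLimit
	}
}

func (p *hardPool) Release() { <-p.slots }

func main() {
	pool := newHardPool(1)
	fmt.Println(pool.Acquire()) // <nil>: first acquisition fits the limit
	fmt.Println(pool.Acquire()) // error: a second would exceed the hard cap
}
```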
Despite setting `PoolLimit: 1`, the mgo driver proceeds to make 2 connections to the primary.

What version of MongoDB are you using (`mongod --version`)?
What version of Go are you using (`go version`)?
What operating system and processor architecture are you using (`go env`)?

What did you do?
Set up a 3-member replica set with 1 primary, 1 secondary, and 1 arbiter.
Run a program along the lines of the sketch below, where `127.0.0.1:27017` is the address of a mongo node in the replica set. It'll start printing out:
Until after 30 seconds it'll start printing out:
Now there are 3 sockets alive: 2 to the primary and 1 to the secondary.
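The original repro program isn't captured here; a minimal sketch along these lines, using mgo's real `SetStats`/`GetStats` helpers and assuming a replica-set member at `127.0.0.1:27017`, should show the same growth in socket counts:

```go
package main

import (
	"fmt"
	"time"

	"github.com/globalsign/mgo"
)

func main() {
	mgo.SetStats(true) // track socket counts so the pool growth is observable

	session, err := mgo.DialWithInfo(&mgo.DialInfo{
		Addrs:     []string{"127.0.0.1:27017"},
		Timeout:   10 * time.Second,
		PoolLimit: 1, // expect at most 1 connection per server
	})
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Print socket counts every second; after the cluster sync kicks in
	// (~30s), SocketsAlive exceeds what PoolLimit would suggest.
	for {
		stats := mgo.GetStats()
		fmt.Printf("sockets alive: %d (master conns: %d, slave conns: %d)\n",
			stats.SocketsAlive, stats.MasterConns, stats.SlaveConns)
		time.Sleep(1 * time.Second)
	}
}
```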
Here are the debug logs from running that with the initial address being `unity.node.gce-us-central1.admiral:27017`:

Can you reproduce the issue on the latest `development` branch?
Yes, the same thing happens.