Closed: cjhenck closed this issue 5 years ago
I guess you are right. I only had in mind the process of creating connections until reaching the upper limit and did not think about this aspect. You are welcome to submit a pull request for this issue. Thanks for spotting this one!
I may do that - I fixed the bug on our repo, but I also added an option to provide multiple connection arguments, so the pool can do a simple round-robin across all of the cluster endpoints and gain some level of failure recovery if one of the endpoints is down. It needs test cases, and I'm not sure if you want all that functionality in there as well.
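For illustration only, the kind of round-robin endpoint selection described above could look roughly like the sketch below. The `HOSTS` list and the `get_connection` helper are made up for the example and are not part of remodel's API, and the exact driver error class name may differ between rethinkdb driver versions.

```python
import itertools

import rethinkdb as r

# Hypothetical list of cluster endpoints; not part of remodel's configuration.
HOSTS = [("db1.example.com", 28015), ("db2.example.com", 28015)]
_endpoints = itertools.cycle(HOSTS)


def get_connection():
    """Try each endpoint once, in round-robin order, skipping dead ones."""
    last_error = None
    for _ in range(len(HOSTS)):
        host, port = next(_endpoints)
        try:
            return r.connect(host=host, port=port)
        except r.ReqlDriverError as error:
            last_error = error  # endpoint unreachable; move on to the next one
    raise last_error
```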
I'd rather not include the multiple connection feature, as I believe remodel should be a simple library and this would deviate a bit from my initial concept. Just out of curiosity, why do you need extra connections for cluster endpoints? From my understanding, RethinkDB clusters work as expected by connecting to a single endpoint and running the queries against it.
Nonetheless, it would be great to include your fix in the next remodel release, together with a handful of tests!
My thinking was twofold:
That said, there are a number of alternatives:
I think for the purposes of the library, a load balancer probably would fit the bill philosophically.
I'll go back to my patch and make a few changes/tests.
I didn't invest that much effort in the connection pool, but I see some very valid points raised by you and in the issue you've linked to. If you put together a useful and working snippet, I will be more than happy to integrate it!
Cool. I'll try to finish it up this week or early next!
I was able to do more investigation of this as well. It seems like RethinkDB supports running a local instance in cluster proxying mode, which I think is a better solution than adding a round robin. I still need to get to the patch.
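For reference, the proxy approach amounts to starting a local proxy node with something like `rethinkdb proxy --join <cluster-host>:29015` (host is a placeholder; 29015 is the intracluster port) and then pointing the driver at localhost, so the pool itself never needs to know about multiple endpoints. A minimal sketch, assuming the default driver port:

```python
import rethinkdb as r

# With a local proxy running (started via `rethinkdb proxy --join <cluster-host>:29015`),
# the application connects to localhost and the proxy routes queries to the cluster.
conn = r.connect(host="localhost", port=28015)
```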
Closing this one for now. Feel free to reopen the issue if you manage to find time for this!
It seems to me that the connection pool will not honor the maximum connections limit, because every time a connection is put back into the queue, the count is decremented. So take the following scenario:
- `max_connections` is 5
- 5 connections are created; `_created_connections.current()` is now 5
- all 5 connections are put back into the queue; `_created_connections.current()` is now 0
- the pool now reports no created connections even though 5 are still alive in the queue; `_created_connections.current()` is now 0

At this point the pool is free to create another 5 connections on top of the existing ones, so the `max_connections` limit is not actually honored.
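For context, one way a pool can keep the limit honest is to count every live connection (checked out or idle) and only decrement the counter when a connection is actually closed, rather than when it is returned to the queue. The sketch below is an illustration of that idea, not remodel's actual implementation; the `BoundedPool` name and `factory` callable are made up, and locking is omitted for brevity.

```python
import queue


class BoundedPool(object):
    """Illustrative pool that never creates more than max_connections."""

    def __init__(self, factory, max_connections=5):
        self._factory = factory          # callable that opens a new connection
        self._max = max_connections
        self._created = 0                # live connections, checked out or idle
        self._idle = queue.Queue()

    def get(self):
        try:
            return self._idle.get_nowait()   # reuse an idle connection first
        except queue.Empty:
            if self._created >= self._max:
                raise RuntimeError("connection limit reached")
            self._created += 1
            return self._factory()

    def put(self, conn):
        self._idle.put(conn)             # returning a connection does NOT decrement

    def discard(self, conn):
        conn.close()
        self._created -= 1               # only closing a connection frees a slot
```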