Closed by chirkin 6 years ago
In what environment are you using pgmoon? It doesn't include a connection pool implementation. If you're using openresty then you can use the size
parameter with the keepalive method: https://github.com/openresty/lua-nginx-module#tcpsocksetkeepalive
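For reference, a minimal sketch of how the pool size is passed through pgmoon's `keepalive` method to the cosocket's `setkeepalive` in an OpenResty handler. The host, database, and the timeout/size values here are illustrative assumptions:

```lua
-- Sketch for an OpenResty content_by_lua* context; values are illustrative.
local pgmoon = require("pgmoon")

local pg = pgmoon.new({
  host = "127.0.0.1",
  port = "5432",
  database = "mydb",   -- assumed name
  user = "postgres",   -- assumed name
})
assert(pg:connect())

-- ... run queries ...

-- Return the socket to nginx's cosocket keepalive pool:
-- first argument is max idle timeout (ms), second is the pool size.
pg:keepalive(10000, 50)
```

Note that, as discussed below, the pool size controls how many idle connections are kept, not how many total connections may be opened.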
Yes, I'm using the openresty environment. I tried the size parameter on the nginx socket, but it's not working; connections beyond the size are not discarded. Should I create some tests for you?
I was under the impression that size prevented the number of connections from dropping below a certain number in the pool (the pool size), not that it limited the total connections to the server. To limit the total connections to the server, I would think you would put a limit in your PostgreSQL's postgresql.conf:
```
max_connections = 20
```
Not very fine-grained, I admit. Alternatively, you could install a connection pooler like pgbouncer and use max_db_connections. This would allow your code to connect at will while preventing the database from being overloaded with open sockets.
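A minimal pgbouncer.ini sketch of the approach above, capping server-side connections per database; the database name, paths, and port are assumptions for illustration:

```ini
; pgbouncer.ini (illustrative)
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
; hard cap on server connections per database:
max_db_connections = 20
```

The application then connects to port 6432 instead of 5432; pgbouncer queues or rejects clients once the cap is reached, so PostgreSQL itself never sees the excess connections.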
I changed the `keepalive` function in the pgmoon/init.lua file:

```lua
keepalive = function(self, ...)
  local sock = self.sock
  self.sock = nil
  return sock:setkeepalive(50, 5)
  -- return sock:setkeepalive(...)
end,
```

So far this tests well for limiting the number of connections.
Actually, just add `lua_socket_pool_size` to your nginx.conf.
In my postgresql.conf:

```
max_connections = 1024
```

In my nginx.conf:

```
worker_processes 2;
```

In my test.lua file:

```lua
pg:keepalive(300000, 500)
```

When I use ab to test:

```
ab -n 100000 -c 1500 -s 300 -k 127.0.0.1/test
```
PostgreSQL reports many messages like these in the log file:

```
sorry, too many clients already
remaining connection slots are reserved for non-replication superuser connections
```

Adding `lua_socket_pool_size 500;` to nginx.conf can't resolve this problem.
`lua_socket_pool_size` does not resolve the problem, as @ttfkam pointed out. The pool size is the number of connections kept open; it will not block new connections from being created. pgbouncer is probably the most reliable solution. Otherwise, you can track the connection count yourself in a shared dict and reject new connections (or sleep) when the count exceeds your limit.
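A rough sketch of the shared-dict approach described above. All names and limits here are made up for illustration, and it assumes `lua_shared_dict pg_conn_count 1m;` has been declared in nginx.conf:

```lua
-- Assumes nginx.conf contains: lua_shared_dict pg_conn_count 1m;
local MAX_CONNS = 500  -- illustrative cap

local dict = ngx.shared.pg_conn_count

-- Atomically increment the counter; the third argument (init) makes
-- incr initialize the key to 0 if it does not exist yet.
local count = dict:incr("open", 1, 0)
if count > MAX_CONNS then
  dict:incr("open", -1)
  return ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)  -- or ngx.sleep() and retry
end

local pgmoon = require("pgmoon")
local pg = pgmoon.new({ host = "127.0.0.1", database = "mydb" })  -- assumed names
assert(pg:connect())

-- ... run queries ...

pg:keepalive()
dict:incr("open", -1)
```

Because the dict is shared across workers, this caps connections for the whole nginx instance, though a request that errors out between the increment and decrement would leak a slot, so production code needs more careful cleanup.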
I don't think limiting number of connections is in the domain of this library, so I'm going to close the ticket.
High load combined with slow DB responses can generate a large number of connections. There should be a limit on the number of simultaneous DB connections.