MagicStack / asyncpg

A fast PostgreSQL Database Client Library for Python/asyncio.
Apache License 2.0

Share connections from a pool with different databases in the same PG server #368

Open carlopires opened 6 years ago

carlopires commented 6 years ago

When using TLS connections to a PG server, the cost of keeping multiple connections open to different databases is high. So I'd like to implement database switching, so that connections from a pool can be shared across different databases.

from collections import defaultdict

import asyncpg

pool = await asyncpg.create_pool(database='postgres')

async def handle(queries):
    """Handle incoming queries, grouped by database name."""

    results = defaultdict(list)

    for dbname, dbqueries in queries.items():
        async with pool.acquire() as connection:

            connection.set_database(dbname)  # I want this !!!

            async with connection.transaction():
                for query in dbqueries:
                    result = await connection.execute(query)
                    results[dbname].append(result)

    return results

Is it feasible with asyncpg?

carlopires commented 6 years ago

Maybe we just need to change pool.acquire() to work as follows:

async with pool.acquire(dbname=mydb) as connection:
    ...

If dbname is passed to pool.acquire(), it switches the database before returning the connection.

elprans commented 6 years ago

The solution is to use multiple pools, one per database. You can wrap them with a context manager object that accepts the database name and picks the correct pool.
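A minimal sketch of such a wrapper, with one lazily-created pool per database. `PoolManager`, `get_pool`, and `_Acquire` are illustrative names, not part of the asyncpg API; `pool_factory` is injectable only so the sketch can be exercised without a live server.

```python
import asyncio


class PoolManager:
    """Keep one asyncpg pool per database, created lazily on first use.

    Illustrative sketch, not asyncpg API.
    """

    def __init__(self, max_size=10, pool_factory=None, **connect_kwargs):
        self._pools = {}                    # dbname -> pool
        self._max_size = max_size
        self._pool_factory = pool_factory   # injectable for testing
        self._connect_kwargs = connect_kwargs
        self._lock = asyncio.Lock()

    async def get_pool(self, dbname):
        if self._pool_factory is None:
            import asyncpg                  # real use requires asyncpg installed
            self._pool_factory = asyncpg.create_pool
        async with self._lock:
            if dbname not in self._pools:
                self._pools[dbname] = await self._pool_factory(
                    database=dbname,
                    max_size=self._max_size,
                    **self._connect_kwargs,
                )
            return self._pools[dbname]

    def acquire(self, dbname):
        """Async context manager yielding a connection to `dbname`."""
        return _Acquire(self, dbname)

    async def close(self):
        for pool in self._pools.values():
            await pool.close()


class _Acquire:
    def __init__(self, manager, dbname):
        self._manager = manager
        self._dbname = dbname

    async def __aenter__(self):
        self._pool = await self._manager.get_pool(self._dbname)
        self._conn = await self._pool.acquire()
        return self._conn

    async def __aexit__(self, *exc):
        await self._pool.release(self._conn)
```

Usage would be `async with pools.acquire('tenant_42') as conn: ...`. Note that this only bounds each per-database pool separately; every database still gets its own physical connections.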

carlopires commented 6 years ago

Maybe I did not express my problem correctly...

I have thousands of databases per server, and I don't want to keep a corresponding number of TLS connections open for each database. What I want is a maximum of, say, 100 TLS connections open to the server, over which I can run queries against different databases.

elprans commented 6 years ago

The PostgreSQL protocol doesn't work like that. You always connect to a specific database. If you need a different database, you have to make a new connection.

carlopires commented 6 years ago

It would be great if the PG protocol supported reuse of TLS connections and let clients switch the database on an established connection. The lack of this imposes a big penalty on servers running multi-tenant apps. For databases deployed in the cloud (like AWS) it is even worse, because we cannot proxy the connections from the database server.

Do you have recommendations on how to serve thousands of queries to different databases on the same server without killing performance? (Keeping a high number of connections open on the server side costs OS resources.)

elprans commented 6 years ago

> It would be great if the PG protocol supported reuse of TLS connections and let clients switch the database on an established connection.

It's not just the protocol, it's the entire execution model. The "backend" process in PostgreSQL always represents a connection to a specific database.

> Do you have recommendations on how to serve thousands of queries to different databases on the same server without killing performance? (Keeping a high number of connections open on the server side costs OS resources.)

I don't think there is a simple solution to this problem. You might want to look at postgres_fdw as a way to dispatch queries to multiple databases through a single database.
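The postgres_fdw approach amounts to creating a foreign server per tenant database inside one "hub" database, then querying everything through a single pool of connections to that hub. A sketch of the required DDL, as a helper that only builds the statements: the server/schema naming scheme (`fdw_<db>`, `ext_<db>`), the `public` schema, and the credentials are all illustrative placeholders.

```python
def fdw_setup_statements(dbname, host="localhost", port=5432,
                         user="app", password="secret"):
    """Build the SQL that exposes `dbname` inside a hub database via
    postgres_fdw. All names and credentials here are placeholders."""
    server = f"fdw_{dbname}"    # foreign server object in the hub db
    schema = f"ext_{dbname}"    # local schema mirroring the remote tables
    return [
        "CREATE EXTENSION IF NOT EXISTS postgres_fdw",
        f"CREATE SERVER IF NOT EXISTS {server} "
        f"FOREIGN DATA WRAPPER postgres_fdw "
        f"OPTIONS (host '{host}', port '{port}', dbname '{dbname}')",
        f"CREATE USER MAPPING IF NOT EXISTS FOR {user} SERVER {server} "
        f"OPTIONS (user '{user}', password '{password}')",
        f"CREATE SCHEMA IF NOT EXISTS {schema}",
        f"IMPORT FOREIGN SCHEMA public FROM SERVER {server} INTO {schema}",
    ]
```

Each statement would be run once per tenant database over a single asyncpg connection to the hub; afterwards, queries go to `ext_<db>.<table>` through one pool. The caveat is that postgres_fdw opens its own server-side connections to the target databases, so this moves the connection fan-out inside the server rather than eliminating it.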