aio-libs / aiomysql

aiomysql is a library for accessing a MySQL database from asyncio.
https://aiomysql.rtfd.io
MIT License

How does aiomysql know to reuse a MySQL pool? #154

Open ihjmh opened 7 years ago

ihjmh commented 7 years ago

In my program, I made a global engine by storing it in a list.

However, when I used ab to test a simple MySQL proxy server (`ab -n 10000 -c 1000 http://localhost`), with the proxy server running 2 workers on a 2-CPU machine, I got a very strange result. I set the connection pool to minsize 10 and maxsize 20, then ran ab. After the first run, MySQL showed 2 connections; running the test again and again, the count rose to 40 by the end and never grew beyond that.

Why does this happen? How can I control all my connections?

jettify commented 7 years ago

By workers do you mean 2 distinct processes, each with its own loop? If so, 40 connections is the correct number; it is not possible to share a pool between processes.
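A minimal sketch of why the numbers add up this way (not code from this thread; host and credentials are placeholders): each worker process runs its own event loop and therefore builds its own pool, so the server-side total is workers × maxsize, here 2 × 20 = 40.

```python
import asyncio
import aiomysql

# Placeholder connection parameters for illustration.
DB_ARGS = dict(host="127.0.0.1", port=3306, user="root",
               password="secret", db="test")

async def worker_main():
    # Each worker process executes this once, so each process owns
    # a separate pool: 2 workers * maxsize=20 => up to 40 connections.
    pool = await aiomysql.create_pool(minsize=10, maxsize=20, **DB_ARGS)
    async with pool.acquire() as conn:
        async with conn.cursor() as cur:
            await cur.execute("SELECT 1")
            print(await cur.fetchone())
    pool.close()
    await pool.wait_closed()

if __name__ == "__main__":
    asyncio.run(worker_main())
```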

ihjmh commented 7 years ago

I know that, but I want to know why the connections increase by 2 each time I run ab.

On the first ab run, 2 connections could handle all the requests, so why won't Sanic reuse those older connections the next time? Does it have to open more connections for each worker until all 40 are used up? Thank you for replying.

jettify commented 7 years ago

Could you show a basic example? I have never used Sanic, but I looked at the examples in the Sanic repo (like this one: https://github.com/channelcat/sanic/blob/b3b27cab34c73bd448f66ad098848f410031278f/examples/sanic_aiopg_example.py) and they are wrong, since they create a connection pool on each request. This is very inefficient and defeats the very purpose of a pool.
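A minimal sketch of the anti-pattern being described and the usual fix (framework-agnostic; handler names and credentials are illustrative, not taken from the Sanic example):

```python
import aiomysql

# Anti-pattern: building a new pool inside every request handler.
# Each request pays full MySQL connection setup, so pooling never helps.
async def handler_bad(request):
    pool = await aiomysql.create_pool(host="127.0.0.1", user="root",
                                      password="secret", db="test")
    async with pool.acquire() as conn:
        ...  # run queries
    pool.close()
    await pool.wait_closed()

# Fix: create one pool at startup and reuse it across requests.
pool = None  # set once, before the first request

async def on_startup():
    global pool
    pool = await aiomysql.create_pool(host="127.0.0.1", user="root",
                                      password="secret", db="test")

async def handler_good(request):
    async with pool.acquire() as conn:
        ...  # connection goes back to the pool when the block exits
```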

jettify commented 7 years ago

By global engine, do you mean a pool created globally and reused between handlers? Created before the first request?


ihjmh commented 7 years ago

No, I know they are wrong; I didn't use that example. Here is my pseudocode:

```python
import sanic
from aiomysql.sa import create_engine
...

global_engine = None

async def getEngine():  # must be async to await create_engine
    global global_engine
    global_engine = await create_engine(..., minsize=10, maxsize=20)

async def index(request):
    if global_engine:
        async with global_engine.acquire() as conn:
            do_something()

app.run(..., workers=2, before_start=getEngine)
```

It's very interesting: each time I run the ab test (`ab -n 10000 -c 1000 http://localhost`), the number of connections increases by 2, until it tops out at 40. My real question is why the first ab test can be handled with just 2 connections, while by the end it needs 40. It obviously doesn't use all 40 connections to handle one ab test.

ihjmh commented 7 years ago

Oh, should I be asking this question under the Sanic issues instead? It seems like a Sanic problem, about how to control the connections.

jettify commented 7 years ago

I think this is the proper place.

In your case `-c 1000` means you have 1000 concurrent client connections, and they need some time to ramp up. Those connections are multiplexed onto 40 MySQL connections, which looks right to me. Is the initial pool size 10 connections for each worker?
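A minimal standalone sketch of the ramp-up behaviour being described (credentials are placeholders): the pool is filled to minsize at creation and only grows, up to maxsize, when concurrent demand exceeds the free connections.

```python
import asyncio
import aiomysql

# Placeholder connection parameters for illustration.
DB_ARGS = dict(host="127.0.0.1", port=3306, user="root",
               password="secret", db="test")

async def query(pool):
    async with pool.acquire() as conn:
        async with conn.cursor() as cur:
            await cur.execute("SELECT SLEEP(0.1)")
            await cur.fetchone()

async def main():
    pool = await aiomysql.create_pool(minsize=10, maxsize=20, **DB_ARGS)
    print("after create:", pool.size)   # filled to minsize (10)
    # 1000 concurrent queries, like ab -c 1000: demand beyond the
    # free connections grows the pool, but never past maxsize.
    await asyncio.gather(*(query(pool) for _ in range(1000)))
    print("after burst:", pool.size)    # grown toward maxsize (20)
    pool.close()
    await pool.wait_closed()

asyncio.run(main())
```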

jettify commented 7 years ago

Also, we rotate connections, so they have less chance of timing out.

ihjmh commented 7 years ago

@jettify I know aiomysql rotates the connections.

In fact, I want to know why 2 connections can handle 1000 concurrent requests at first, but by the end it needs 40 connections. That is a serious problem for me, because I cannot control or close some of the free connections. I tried the code below to close the free connections, but it doesn't work at all; I have to keep all 40 connections unless I close the whole engine:

```python
async with engine.acquire() as conn:
    await conn.close()
```
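For reference, the plain aiomysql pool does expose a way to drop idle connections: `Pool.clear()` closes all free connections, and the pool refills toward minsize on the next acquire. A minimal sketch with placeholder credentials; whether the SQLAlchemy-style engine wrapper exposes the same call is not confirmed here:

```python
import asyncio
import aiomysql

# Placeholder connection parameters for illustration.
DB_ARGS = dict(host="127.0.0.1", port=3306, user="root",
               password="secret", db="test")

async def main():
    pool = await aiomysql.create_pool(minsize=10, maxsize=20, **DB_ARGS)
    print("before clear:", pool.size, "free:", pool.freesize)
    # clear() closes every idle connection; checked-out connections
    # are left untouched.
    await pool.clear()
    print("after clear:", pool.size, "free:", pool.freesize)
    pool.close()
    await pool.wait_closed()

asyncio.run(main())
```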
wzyonggege commented 5 years ago

@ihjmh I also use a "global" value to reuse the aiomysql pool in a Tornado app. Do you have a better way that avoids using "global"?
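One common alternative (a sketch, not from this thread; credentials are placeholders) is to attach the pool to the Tornado `Application` object at startup and reach it through the handler, so no module-level global is needed:

```python
import aiomysql
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    async def get(self):
        pool = self.application.pool  # attached at startup, no global
        async with pool.acquire() as conn:
            async with conn.cursor() as cur:
                await cur.execute("SELECT 1")
                row = await cur.fetchone()
        self.write(str(row))

async def make_app():
    app = tornado.web.Application([(r"/", MainHandler)])
    # Placeholder credentials; the pool lives as an attribute of the
    # application instance instead of a module-level global.
    app.pool = await aiomysql.create_pool(host="127.0.0.1", user="root",
                                          password="secret", db="test")
    return app
```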


wtdcode commented 5 years ago

Same problem here.

The connections slowly increase to maxsize, and then `acquire()` gets stuck. At the same time, I can see lots of connections in the SLEEP state via `SHOW FULL PROCESSLIST`.
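One pattern that can produce exactly these symptoms is acquiring connections without releasing them; a hypothetical repro sketch (an assumption about the cause, not this poster's code; credentials are placeholders):

```python
import asyncio
import aiomysql

# Placeholder connection parameters for illustration.
DB_ARGS = dict(host="127.0.0.1", port=3306, user="root",
               password="secret", db="test")

async def leaky(pool):
    # Acquired without `async with` and never released: the connection
    # stays checked out and idles in SLEEP state on the server.
    conn = await pool.acquire()
    async with conn.cursor() as cur:
        await cur.execute("SELECT 1")
        await cur.fetchone()
    # Missing: pool.release(conn)

async def main():
    pool = await aiomysql.create_pool(minsize=1, maxsize=5, **DB_ARGS)
    for _ in range(5):
        await leaky(pool)
    # The pool is now exhausted, so this acquire() blocks forever.
    async with pool.acquire() as conn:
        pass

asyncio.run(main())
```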

terricain commented 5 years ago

@wtdcode do you have a snippet that will cause this to reoccur?