We should have one prepared statement per query per connection, so 1000 is way above our expectations. The cache is also per connection, so concurrency within Elixir should not be an issue, unless the issue comes from how we concurrently prepare the queries against the database.
Just double-checking: are you running the latest version, myxql v0.3.4?
Even more recent than 0.3.4, since you pushed a hotfix last week with regard to the leakage.

Oops, sorry. Rather, I'm using a custom commit you pushed last week, so it's essentially the same.
Oh, it is an Ecto query cache issue. We are racing Ecto's cache.
This is probably an edge case, since it requires that the statement being sent has not been prepared yet and that multiple processes execute the same query at the same time. If the statement has already been prepared beforehand, then it's fine.
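(A minimal sketch of the race, assuming a hypothetical `MyApp.Repo` and table; the statement must not be in the cache yet.)

```elixir
import Ecto.Query

# Both tasks miss Ecto's query cache before either has populated it,
# so each one prepares its own server-side copy of the same statement.
query = from r in "some_table", where: r.id == ^1, select: r.id

1..2
|> Enum.map(fn _ -> Task.async(fn -> MyApp.Repo.all(query) end) end)
|> Enum.map(&Task.await/1)
```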
OK. I filed an issue last week about identical prepared statements growing in number when you send an erroneous statement via `query`. I'm now replicating an issue where non-erroneous queries can also cause multiple identical prepared statements in the DB.
To replicate the issue, I start the service with a pool of just 2 connections. To confirm the setup, the config looks roughly like the sketch below.
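(A sketch only; `:my_app`, `MyApp.Repo`, and the credentials are placeholders. The point is `pool_size: 2`.)

```elixir
# config/dev.exs (hypothetical) -- only pool_size matters here
import Config

config :my_app, MyApp.Repo,
  username: "root",
  password: "secret",
  database: "my_app_dev",
  hostname: "localhost",
  pool_size: 2
```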
Then I ran 1000 parallel queries against the Repo, along the lines of the sketch below.
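(A sketch of the kind of load I mean, not the exact commands; `MyApp.Repo` and `"some_table"` are placeholders.)

```elixir
import Ecto.Query

# The same parameterized query, fired 1000 times concurrently
# over the pool of 2 connections.
query = from r in "some_table", where: r.id == ^1, select: r.id

1..1000
|> Task.async_stream(fn _ -> MyApp.Repo.all(query) end,
  max_concurrency: 1000,
  timeout: :infinity
)
|> Stream.run()
```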
After the queries had been sent, I confirmed from the processlist that there were still only 2 connections in the pool (to make sure there was no misconfiguration).
But when I checked `performance_schema.prepared_statements_instances` (see the check sketched below), 1000 instances had been created.
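(For reference, both checks can be run through the repo itself; this is a sketch that assumes MySQL's `performance_schema` is enabled.)

```elixir
# Connection count -- should stay at the pool size of 2:
MyApp.Repo.query!("SHOW PROCESSLIST")

# Server-side prepared statements -- this is where the 1000 shows up:
MyApp.Repo.query!(
  "SELECT COUNT(*) FROM performance_schema.prepared_statements_instances"
)
```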
To me, there seems to be a race condition somewhere in the driver that causes it to prepare a statement again even when it has already been prepared. That is weird, because with poolboy we check out a connection and check it back in afterwards, so once a statement has been prepared on a connection, it shouldn't need to be re-prepared on subsequent checkouts.
This bug doesn't happen when you put an interval between the queries, which is understandable: by then the statement has already been prepared and cached.
But on a live setup, where it's very possible to have multiple requests running at the same time, this type of leakage can easily occur.