brettwooldridge / HikariCP

光 HikariCP・A solid, high-performance, JDBC connection pool at last.
Apache License 2.0

Connections not going back to pool #2148

Open slbejaranom opened 7 months ago

slbejaranom commented 7 months ago

Hello, I have this hikari configuration

logging.level.com.zaxxer.hikari.HikariConfig=DEBUG
logging.level.com.zaxxer.hikari=TRACE
spring.datasource.hikari.maximumPoolSize=60
spring.datasource.hikari.minimumIdle=5
spring.datasource.hikari.idleTimeout=900000
spring.datasource.hikari.leakDetectionThreshold=300000

I'm using Spring Boot 3 for this, and I have a repository and a few DAOs and DTOs.

The problem is the following: this Spring Boot 3 app uses multithreading with "ThreadPoolTaskExecutor", whose configuration is this:

service.async.pool.write.core.size=25
service.async.pool.write.max.size=25
service.async.pool.write.queue.size=5000
service.async.pool.write.name.prefix=service-async-pool-
service.async.pool.write.execution.timeout=PT120S

In my mind a maximumPoolSize of 60 is even too big for this, but somehow all the connections are going to the ACTIVE state; the threads are not freeing the connections.

[screenshot: pool metrics showing all connections in ACTIVE state]

This is one of the methods; it's just a bunch of selects, writes, and updates, but everything follows the same pattern.

I don't know what's going on; it seems like a bug to me.

PS: I know the obvious answer would be "increase the pool size", but it seems like only a matter of time until it fails again.

Nek-12 commented 7 months ago

I can attest to this: for us, HikariCP leaks all of the connections that were opened into an ACTIVE state.

Nek-12 commented 7 months ago

#2152 and #2151 also seem to be related.

lfbayer commented 7 months ago

At face value this doesn't look like a HikariCP issue. The example code snippet makes no direct call to a HikariCP data source: neither a getConnection call nor a connection.close call.

I'm not personally familiar with the framework or the other classes in your code so I cannot judge what is going wrong.

You link to the double close issue, but an extra close is a no-op and absolutely wouldn't cause a leak. And you link to the virtual thread ticket, but I don't see any mention of using virtual threads in this ticket.

If you are able to prove that a call to close on a HikariCP connection doesn't return it to the pool, then you have found a bug, but I strongly suspect the issue here is that close isn't even being called. That would make this an issue outside of HikariCP.
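To illustrate lfbayer's point, here is a minimal toy sketch (the ToyPool and ToyConnection classes are hypothetical stand-ins, not HikariCP's API): a pooled connection only goes back to the pool when close() is called on it, which is why try-with-resources is the safe pattern and why a skipped close() looks like a permanently ACTIVE connection.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of a connection pool: NOT HikariCP, just an illustration
// that a connection only returns to the pool when close() is called.
public class PoolDemo {
    static class ToyPool {
        private final Deque<ToyConnection> idle = new ArrayDeque<>();
        int active = 0;

        ToyPool(int size) {
            for (int i = 0; i < size; i++) idle.push(new ToyConnection(this));
        }

        ToyConnection getConnection() {
            active++;
            return idle.pop(); // throws if the pool is exhausted
        }

        void giveBack(ToyConnection c) {
            active--;
            idle.push(c);
        }
    }

    // close() hands the connection back, like a real pooled Connection proxy.
    static class ToyConnection implements AutoCloseable {
        private final ToyPool pool;
        ToyConnection(ToyPool pool) { this.pool = pool; }
        @Override public void close() { pool.giveBack(this); }
    }

    public static void main(String[] args) {
        ToyPool pool = new ToyPool(2);

        // Leaky pattern: getConnection without close -> stays active forever.
        ToyConnection leaked = pool.getConnection();
        System.out.println("active after leak: " + pool.active); // 1

        // Correct pattern: try-with-resources always calls close().
        try (ToyConnection c = pool.getConnection()) {
            // ... run queries ...
        }
        System.out.println("active after try-with-resources: " + pool.active); // 1 (only the leak remains)
    }
}
```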

Nek-12 commented 6 months ago

Yes, the double close is not related; pardon me for mislinking.

It is unclear what causes the connections to be leaked, but I'm pretty sure it's one of:

- a Postgres driver update
- a Java update to 21
- a HikariCP update

No significant changes were made in our code since the upgrade, which led me to believe our ORM framework could be the cause. I've filed an issue there as well and it's being investigated. However, downgrading the ORM framework did not resolve the issue, which suggests it's not the ORM framework (maybe?). So the only other cause I could identify was HikariCP. And there is already a very similar issue from another person, leading me to believe this is a bug and not some other problem in our code.

Nek-12 commented 6 months ago

I can confirm that going back from 5.1.0 to 5.0.1 immediately fixed the issue.

theigl commented 6 months ago

We also ran into connection issues with 5.1.0 that were resolved by downgrading back to 5.0.1.

@slbejaranom could you check if downgrading to 5.0.1 fixes your issue as well?

lfbayer commented 6 months ago

If you are able to easily reproduce the issue, can one of you do a git bisect of HikariCP between 5.0.1 and 5.1.0 to see specifically which commit causes the issue to appear?
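For anyone picking this up, a bisect between the two releases could look roughly like the following sketch. The tag names are an assumption based on the repo's usual HikariCP-x.y.z convention; the build and reproducer steps depend on your own setup.

```shell
# Sketch: bisecting HikariCP between the 5.0.1 and 5.1.0 release tags.
git clone https://github.com/brettwooldridge/HikariCP.git
cd HikariCP
git bisect start
git bisect bad HikariCP-5.1.0    # version showing the leak
git bisect good HikariCP-5.0.1   # last known-good version
# At each step git checks out a candidate commit: build it, run your
# reproducer against the built jar, then mark the result:
#   mvn -DskipTests package
#   git bisect good   # or: git bisect bad
# Repeat until git reports the first bad commit, then clean up:
git bisect reset
```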

theigl commented 6 months ago

Most commits since 5.0.1 seem harmless, except for this one: c2148006ceff26085bd731b650b9a5a3ef0e45f6.

Unfortunately we can't reproduce the issue easily. We hit it in production and reverted immediately.

slbejaranom commented 6 months ago

Hello, this connection leak was caused by an unclosed connection in an IdentifierGenerator; it was a code issue on our side after all!

Nek-12 commented 6 months ago

Can you reopen this, please? Other people are experiencing this issue.

slbejaranom commented 6 months ago

Reopened on request

CostGranda commented 4 months ago

I can confirm the same issue, with this configuration:

HikariPool-1 - configuration:
allowPoolSuspension................................false
autoCommit................................false
catalog................................none
connectionInitSql................................none
connectionTestQuery................................none
connectionTimeout................................30000
dataSource................................none
dataSourceClassName................................none
dataSourceJNDI................................none
dataSourceProperties................................{password=<masked>, reWriteBatchedInserts=true, preparedStatementCacheQueries=512, preferQueryMode=extendedCacheEverything}
driverClassName................................"org.postgresql.Driver"
exceptionOverrideClassName................................none
healthCheckProperties................................{}
healthCheckRegistry................................none
idleTimeout................................600000
initializationFailTimeout................................1
isolateInternalQueries................................false
jdbcUrl................................jdbc:postgresql://NO_RELEVANT
keepaliveTime................................0
leakDetectionThreshold................................60000
maxLifetime................................150000
maximumPoolSize................................10
metricRegistry................................none
metricsTrackerFactory................................none
minimumIdle................................10
password................................<masked>
poolName................................"HikariPool-1"
readOnly................................false
registerMbeans................................false
scheduledExecutor................................none
schema................................"ope"
threadFactory................................internal
transactionIsolation................................default
username................................"postgres"
validationTimeout................................5000

HikariCP 5.0.1, Spring Boot 2.7.14.

I have tried the most recent version, but it doesn't work either.

liuzhilong62 commented 1 month ago

Hi, I also encountered the same problem. My PostgreSQL database's ACTIVE sessions reached 800+ in 2 minutes. HikariCP with 70+ nodes:

version:3.4.5
maxPoolSize:20
minIdle:3
connectionTimeout:3000
validationTimeout:2000
maxLifetime:900000

postgres :

version:13.x
max_connection=1000
idle_in_transaction_session_timeout=2h

Sorry for the old version. I adjusted some parameters of the database, but I'm not sure whether the problem will happen again. Is there a bug? Or can someone help me locate the relevant code? Thanks!

wanghuizuo commented 4 weeks ago

I'm having the same problem. I can reproduce it by shrinking the connection pool, but I haven't found a solution; downgrading HikariCP to 5.0.1 also shows the problem. Environment: Spring Boot 3.3.0, JDK 21, Postgres 14. Parameters to shrink the connection pool:

maxPoolSize:2
minIdle:1