Closed: JohnMerlino1 closed this issue 9 years ago.
Did you find the cause?
I'm curious about the resolution to this too. We're experiencing a similar timeout issue and don't know the cause. I'm hoping commit 4c23861 fixes it.
It could be. The bug generally only reared its head when the pool is over-sized and the timeouts (idle and max lifetime) are not properly set.
@whaleyr @JohnMerlino1 A race condition was discovered in 1.4.0 that was causing this timeout to occur (a connection awaiting a connection would not be woken up). It has been fixed (2fb08a4) in release 2.0.1.
We are on "com.zaxxer" % "HikariCP-java6" % "2.0.1"
and we are seeing this:
java.sql.SQLException: Timeout of 30000ms encountered waiting for connection.
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:207) ~[com.zaxxer.HikariCP-java6-2.0.1.jar:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:93) ~[com.zaxxer.HikariCP-java6-2.0.1.jar:na]
at scala.slick.jdbc.JdbcBackend$DatabaseFactoryDef$$anon$3.createConnection(JdbcBackend.scala:47) ~[com.typesafe.slick.slick_2.10-2.0.2.jar:2.0.2]
at scala.slick.jdbc.JdbcBackend$BaseSession.conn$lzycompute(JdbcBackend.scala:310) ~[com.typesafe.slick.slick_2.10-2.0.2.jar:2.0.2]
at scala.slick.jdbc.JdbcBackend$BaseSession.conn(JdbcBackend.scala:310) ~[com.typesafe.slick.slick_2.10-2.0.2.jar:2.0.2]
at scala.slick.jdbc.JdbcBackend$BaseSession.close(JdbcBackend.scala:324) ~[com.typesafe.slick.slick_2.10-2.0.2.jar:2.0.2]
at scala.slick.backend.DatabaseComponent$DatabaseDef$class.withSession(DatabaseComponent.scala:31) ~[com.typesafe.slick.slick_2.10-2.0.2.jar:2.0.2]
at scala.slick.jdbc.JdbcBackend$DatabaseFactoryDef$$anon$3.withSession(JdbcBackend.scala:46) ~[com.typesafe.slick.slick_2.10-2.0.2.jar:2.0.2]
at scala.slick.backend.DatabaseComponent$DatabaseDef$class.withDynSession(DatabaseComponent.scala:39) ~[com.typesafe.slick.slick_2.10-2.0.2.jar:2.0.2]
at scala.slick.jdbc.JdbcBackend$DatabaseFactoryDef$$anon$3.withDynSession(JdbcBackend.scala:46) ~[com.typesafe.slick.slick_2.10-2.0.2.jar:2.0.2]
It is unlikely to be the same cause as this issue. Are you sure this is not a real timeout (all connections are busy)? I have two suggestions:
Report what you find here.
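For what it's worth, one way to confirm whether the pool really is exhausted at the moment the timeout fires is to register HikariCP's MBeans and read the pool gauges. A minimal sketch, not from the original thread: the pool name, JDBC URL, and credentials are placeholders, and the ObjectName follows HikariCP's documented JMX naming, which may vary by version.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.lang.management.ManagementFactory;
import java.sql.Connection;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class PoolStatsCheck {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost/test"); // placeholder URL/credentials
        config.setUsername("user");
        config.setPassword("pass");
        config.setPoolName("myPool");      // assumed pool name, reused in the ObjectName below
        config.setRegisterMbeans(true);    // expose pool statistics over JMX
        HikariDataSource ds = new HikariDataSource(config);

        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName pool = new ObjectName("com.zaxxer.hikari:type=Pool (myPool)");
        try (Connection ignored = ds.getConnection()) {
            // At the moment a timeout is reported, these gauges show whether
            // every connection really is checked out (active == maximumPoolSize).
            System.out.println("active=" + server.getAttribute(pool, "ActiveConnections")
                    + " idle=" + server.getAttribute(pool, "IdleConnections")
                    + " total=" + server.getAttribute(pool, "TotalConnections")
                    + " waiting=" + server.getAttribute(pool, "ThreadsAwaitingConnection"));
        }
    }
}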
I too see this. In my case, I use Hikari with Hibernate, and have 2 session factories and 2 pools pointing to 2 different databases. The first one starts up fine - Hibernate acquires a connection to get database metadata. The second one doesn't - times out attempting to acquire a connection to the second database. Note that this is before the application starts handling any requests - this is just booting up Hibernate. I've played with different timeouts - ranging from 1,000 to 120,000 millis - so it doesn't seem like just a delay establishing connections. Some other maybe-relevant details:
I grabbed a thread dump, here are the relevant threads:
"HikariCP connection filler" #25 daemon prio=5 os_prio=31 tid=0x00007f9cfe1fb000 nid=0xda03 waiting on condition [0x00000001310ac000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at com.zaxxer.hikari.util.PoolUtilities.quietlySleep(PoolUtilities.java:75)
at com.zaxxer.hikari.pool.HikariPool$1AddConnection.run(HikariPool.java:317)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
"Hikari Housekeeping Timer (pool balance_0)" #24 daemon prio=5 os_prio=31 tid=0x00007f9cfe341800 nid=0xd803 waiting on condition [0x0000000130fa9000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000006c0e73960> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
"HikariCP connection filler" #23 daemon prio=5 os_prio=31 tid=0x00007f9cfc3f3800 nid=0xd603 waiting on condition [0x0000000130c59000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at com.zaxxer.hikari.util.PoolUtilities.quietlySleep(PoolUtilities.java:75)
at com.zaxxer.hikari.pool.HikariPool$1AddConnection.run(HikariPool.java:317)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
"HikariCP connection filler" #19 daemon prio=5 os_prio=31 tid=0x00007f9cfca12000 nid=0xcd03 waiting on condition [0x0000000130ea6000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at com.zaxxer.hikari.util.PoolUtilities.quietlySleep(PoolUtilities.java:75)
at com.zaxxer.hikari.pool.HikariPool$1AddConnection.run(HikariPool.java:317)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
"Hikari Housekeeping Timer (pool balance)" #18 daemon prio=5 os_prio=31 tid=0x00007f9cfd992000 nid=0xcb03 waiting on condition [0x0000000130da3000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000006c0b90b48> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
"HikariCP connection filler" #17 daemon prio=5 os_prio=31 tid=0x00007f9cfd0f6000 nid=0xc903 waiting on condition [0x0000000130b56000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at com.zaxxer.hikari.util.PoolUtilities.quietlySleep(PoolUtilities.java:75)
at com.zaxxer.hikari.pool.HikariPool$1AddConnection.run(HikariPool.java:317)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
"main" #1 prio=5 os_prio=31 tid=0x00007f9cf980d000 nid=0x1903 waiting on condition [0x000000010c7bb000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000006c0ebc070> (a com.zaxxer.hikari.util.ConcurrentBag$Synchronizer)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedLongSynchronizer.doAcquireSharedNanos(AbstractQueuedLongSynchronizer.java:815)
at java.util.concurrent.locks.AbstractQueuedLongSynchronizer.tryAcquireSharedNanos(AbstractQueuedLongSynchronizer.java:1106)
at com.zaxxer.hikari.util.ConcurrentBag.borrow(ConcurrentBag.java:146)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:169)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:93)
at com.my_custom_connection_provider.SquareHikariConnectionProvider.getConnection(SquareHikariConnectionProvider.java:163)
at org.hibernate.engine.jdbc.internal.JdbcServicesImpl$ConnectionProviderJdbcConnectionAccess.obtainConnection(JdbcServicesImpl.java:279)
at org.hibernate.engine.jdbc.internal.JdbcServicesImpl.configure(JdbcServicesImpl.java:124)
at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.configureService(StandardServiceRegistryImpl.java:111)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:234)
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:206)
at org.hibernate.cfg.Configuration.buildTypeRegistrations(Configuration.java:1885)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1843)
(snipped)
To clarify, our stack trace comes from using Slick in a Scala Play app. We also have two databases configured, each with its own Hikari pool data source.
Here's a test that recreates this: https://gist.github.com/maniksurtani/14b755f3fe67d3841f49
Running the test in a loop 5000 times, I've seen it fail 5 times:
java.sql.SQLException: Timeout of 1000ms encountered waiting for connection.
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:207)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:93)
at HikariTest.main(HikariTest.java:24)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
java.sql.SQLException: Timeout of 1000ms encountered waiting for connection.
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:207)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:93)
at HikariTest.main(HikariTest.java:24)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
java.sql.SQLException: Timeout of 1000ms encountered waiting for connection.
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:207)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:93)
at HikariTest.main(HikariTest.java:24)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
java.sql.SQLException: Timeout of 1000ms encountered waiting for connection.
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:207)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:93)
at HikariTest.main(HikariTest.java:24)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
java.sql.SQLException: Timeout of 1000ms encountered waiting for connection.
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:207)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:93)
at HikariTest.main(HikariTest.java:24)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Success rate 4995/5000 in 134404 millis
Ok, a couple of things here. The test case linked in the gist above is pretty far from typical usage: creating and destroying pools at that rate basically tortures MySQL, not HikariCP. Having said that, it succeeded on my box:
Success rate 5000/5000 in 136790 millis
But a 1000ms timeout under such load is a lot to ask of MySQL, so I'm not surprised that you saw a few spurious failures.
Of course, this test is not at all how HikariCP would be used in the real world. You would create the two pools outside of the loop, and then obtain and close 5000 connections from each pool, like so:
HikariConfig cfg1 = configure(db1);
HikariConfig cfg2 = configure(db2);
HikariDataSource pool1 = new HikariDataSource(cfg1);
HikariDataSource pool2 = new HikariDataSource(cfg2);

for (int i = 0; i < RUNS; i++) {
   try (Connection conn1 = pool1.getConnection();
        Connection conn2 = pool2.getConnection()) {
      successes++;
   }
   catch (Exception e) {
      failures++;
      e.printStackTrace();
   }
}

pool1.close();
pool2.close();
When I do this, I get:
Success rate 5000/5000 in 694 millis
To further diagnose why the second connection pool is failing to start, set initializationFailFast to true. This will cause the application to fail immediately during startup if it cannot connect to the database, and will print the stack trace reported by the driver.
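A minimal sketch of that setting in code (the URL and credentials are placeholders; the property is initializationFailFast in HikariCP 2.x):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class FailFastExample {
    public static void main(String[] args) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost/db2");  // placeholder for the second database
        config.setUsername("user");
        config.setPassword("pass");
        // Fail immediately at startup if no connection can be obtained,
        // surfacing the driver's own exception instead of a later pool timeout.
        config.setInitializationFailFast(true);
        HikariDataSource pool2 = new HikariDataSource(config);
        pool2.close();  // close() in recent versions; older releases call shutdown()
    }
}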
Yes, I realise the loop is contrived; I was using it to tease out an intermittent failure. Thanks for the tip re: initializationFailFast - it does look like a (still intermittent) database connection failure. Some additional logging when addConnection() fails may be handy in future, even in cases where initializationFailFast is false.
If you enable debug logging, addConnection() should log on failure. We used to log at a higher level, but some users complained that during database-down scenarios the messages overwhelmed their logs and rolled them.
In a Play project with two sub-projects, we are trying HikariCP and it does not seem to release connections back to the pool, and it throws the same exception. We are using two Postgres databases with play-hikaricp 1.4.1, which uses HikariCP 2.0.1. After six simple queries:
[DEBUG] c.z.h.p.HikariPool - Before cleanup pool stats pool1 (total=5, inUse=5, avail=0, waiting=1)
[DEBUG] c.z.h.p.HikariPool - After cleanup pool stats pool1 (total=5, inUse=5, avail=0, waiting=1)
[DEBUG] c.z.h.p.HikariPool - Timeout failure pool stats pool1 (total=5, inUse=5, avail=0, waiting=0)
[ERROR] application - Error in datastore operation detected.
java.sql.SQLException: Timeout of 30000ms encountered waiting for connection.
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:207) ~[HikariCP-java6-2.0.1.jar:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:93) ~[HikariCP-java6-2.0.1.jar:na]
at scala.slick.jdbc.JdbcBackend$DatabaseFactoryDef$$anon$3.createConnection(JdbcBackend.scala:47) ~[slick_2.11-2.1.0.jar:0.8.0]
at scala.slick.jdbc.JdbcBackend$BaseSession.conn$lzycompute(JdbcBackend.scala:397) ~[slick_2.11-2.1.0.jar:0.8.0]
at scala.slick.jdbc.JdbcBackend$BaseSession.conn(JdbcBackend.scala:397) ~[slick_2.11-2.1.0.jar:0.8.0]
[DEBUG] c.z.h.p.HikariPool - Before cleanup pool stats pool1 (total=5, inUse=4, avail=1, waiting=0)
[DEBUG] c.z.h.p.HikariPool - After cleanup pool stats pool1 (total=5, inUse=4, avail=1, waiting=0)
Something is not closing connections in a timely fashion. Can you enable leak detection to see who is holding the connections?
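For reference, a minimal sketch of turning leak detection on programmatically; the 30-second threshold, URL, and credentials below are placeholder values:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.util.concurrent.TimeUnit;

public class LeakDetectionExample {
    public static void main(String[] args) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost/pool1db"); // placeholder URL/credentials
        config.setUsername("user");
        config.setPassword("pass");
        // Any connection held longer than 30 seconds gets a WARN with the
        // stack trace of the code that borrowed it (the connection is not closed for you).
        config.setLeakDetectionThreshold(TimeUnit.SECONDS.toMillis(30));
        HikariDataSource ds = new HikariDataSource(config);
        // ... run the application / queries against ds here ...
    }
}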
Thanks for your quick reply. Here's the result:
[WARN] c.z.h.p.LeakTask - Connection leak detection triggered, stack trace follows
java.lang.Exception: null
at scala.slick.jdbc.JdbcBackend$DatabaseFactoryDef$$anon$3.createConnection(JdbcBackend.scala:47) ~[slick_2.11-2.1.0.jar:0.8.0]
at scala.slick.jdbc.JdbcBackend$BaseSession.conn$lzycompute(JdbcBackend.scala:397) ~[slick_2.11-2.1.0.jar:0.8.0]
at scala.slick.jdbc.JdbcBackend$BaseSession.conn(JdbcBackend.scala:397) ~[slick_2.11-2.1.0.jar:0.8.0]
at scala.slick.jdbc.JdbcBackend$SessionDef$class.prepareStatement(JdbcBackend.scala:152) ~[slick_2.11-2.1.0.jar:0.8.0]
at scala.slick.jdbc.JdbcBackend$BaseSession.prepareStatement(JdbcBackend.scala:389) ~[slick_2.11-2.1.0.jar:0.8.0]
[WARN] c.z.h.p.LeakTask - Connection leak detection triggered, stack trace follows
java.lang.Exception: null
at scala.slick.jdbc.JdbcBackend$DatabaseFactoryDef$$anon$3.createConnection(JdbcBackend.scala:47) ~[slick_2.11-2.1.0.jar:0.8.0]
at scala.slick.jdbc.JdbcBackend$BaseSession.conn$lzycompute(JdbcBackend.scala:397) ~[slick_2.11-2.1.0.jar:0.8.0]
at scala.slick.jdbc.JdbcBackend$BaseSession.conn(JdbcBackend.scala:397) ~[slick_2.11-2.1.0.jar:0.8.0]
at scala.slick.jdbc.JdbcBackend$SessionDef$class.prepareStatement(JdbcBackend.scala:152) ~[slick_2.11-2.1.0.jar:0.8.0]
at scala.slick.jdbc.JdbcBackend$BaseSession.prepareStatement(JdbcBackend.scala:389) ~[slick_2.11-2.1.0.jar:0.8.0]
What does this mean? I noticed some other users in this discussion have had a similar issue with two databases/pools.
Please read the answer here.
Just for the record, closing the sessions manually solved the problem. Thanks.
Hello, is this bug already fixed? We are experiencing a very similar issue using HikariCP 2.2.5 + Hibernate 4.3.5 + Spring 4.0.6 with a SQL Server 2008 database. Our configuration looks like this:
<bean id="mainDataSource" class="com.zaxxer.hikari.HikariDataSource" destroy-method="shutdown">
<property name="driverClassName" value="${hibernate.connection.driver_class}" />
<property name="jdbcUrl" value="${hibernate.connection.url}" />
<property name="username" value="${hibernate.connection.username}" />
<property name="password" value="${hibernate.connection.password}" />
<property name="connectionTestQuery" value="${hibernate.pooling.connectionTestStatement}" />
<property name="idleTimeout" value="${hibernate.pooling.idleTimeout}" />
<property name="maximumPoolSize" value="${hibernate.pooling.maxPoolSize}"/>
<property name="poolName" value="ZEUS_APP_POOL"/>
</bean>
<bean id="dataSource" class="org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy">
<property name="targetDataSource" ref="mainDataSource" />
</bean>
With these pooling parameters:
hibernate.pooling.connectionTestStatement = SELECT 1
hibernate.pooling.idleTimeout = 1800000
hibernate.pooling.maxPoolSize = 30
And the stack trace we get many times is the following:
org.hibernate.exception.GenericJDBCException: could not prepare statement
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:54)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:126)
.....
Caused by: java.sql.SQLException: Timeout after 30005ms of waiting for a connection.
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:208)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:108)
at org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy$LazyConnectionInvocationHandler.getTargetConnection(LazyConnectionDataSourceProxy.java:403)
at org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy$LazyConnectionInvocationHandler.invoke(LazyConnectionDataSourceProxy.java:376)
at com.sun.proxy.$Proxy26.prepareStatement(Unknown Source)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:162)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:186)
... 81 more
Thank you in advance
We get the same error:
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1086) [tomcat-coyote.jar:8.0.18]
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:659) [tomcat-coyote.jar:8.0.18]
at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:223) [tomcat-coyote.jar:8.0.18]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1558) [tomcat-coyote.jar:8.0.18]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1515) [tomcat-coyote.jar:8.0.18]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.8.0]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.8.0]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-util.jar:8.0.18]
at java.lang.Thread.run(Unknown Source) [na:1.8.0]
Caused by: java.sql.SQLTimeoutException: Timeout after 30001ms of waiting for a connection.
at com.zaxxer.hikari.pool.BaseHikariPool.getConnection(BaseHikariPool.java:228) ~[HikariCP-2.3.2.jar:na]
at com.zaxxer.hikari.pool.BaseHikariPool.getConnection(BaseHikariPool.java:183) ~[HikariCP-2.3.2.jar:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:91) ~[HikariCP-2.3.2.jar:na]
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:139) ~[hibernate-core-4.3.8.Final.jar:4.3.8.Final]
at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:380) ~[hibernate-core-4.3.8.Final.jar:4.3.8.Final]
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:228) ~[hibernate-core-4.3.8.Final.jar:4.3.8.Final]
We do close the session after it is used. We use HikariCP 2.3.3, Hibernate 4.3.8, Spring 4.1.4, and SQL Server 2012, with this configuration for com.zaxxer.hikari.HikariDataSource:
<prop key="driverClassName">net.sourceforge.jtds.jdbc.Driver</prop>
<prop key="jdbcUrl">${jdbc.url}</prop>
<prop key="username">${jdbc.username}</prop>
<prop key="password">${jdbc.password}</prop>
<prop key="autoCommit">false</prop>
<prop key="connectionTimeout">${connection.hikari.timeout}</prop>
<prop key="idleTimeout">${connection.hikari.timeout}</prop>
<prop key="maxLifetime">${connection.hikari.max.lifetime}</prop>
<prop key="maximumPoolSize">${connection.hikari.pool.max.size}</prop>
<prop key="connectionTestQuery">SELECT 1</prop>
<prop key="poolName">${hikari.pool.name}</prop>
<prop key="initializationFailFast">false</prop>
<prop key="leakDetectionThreshold">${hikari.leak.detection.threshold.time.millis}</prop>
connection.hikari.timeout=30000
connection.hikari.max.lifetime=400000
connection.hikari.pool.max.size=64
hikari.leak.detection.threshold.time.millis=15000
We use this configuration for multiple databases, but this occurs only for one of them, for which we have 3 pools (the others have 1). We have a lot of traffic, but I have never seen more than 30-40 connections used at the same time per pool. Also, this is the only log output; leakDetectionThreshold doesn't log anything.
@adnanisajbeg @sergi-mm As we always ask, can you turn on DEBUG level logging for the package com.zaxxer.hikari? How to do that will depend on your application server and logging framework.
What we are looking for is the pool conditions immediately preceding the connection issues. With logging enabled, HikariCP should log pool statistics every 30 seconds.
Ok I will turn it on and post here as soon as it happens again.
@adnanisajbeg @sergi-mm Do you have any errors in the SQL Server logs? And are you monitoring the SQL Server connection counts as well?
I've upgraded to HikariCP 2.3.3 and I've activated the following logging for the application:
log4j.logger.HikariPool = DEBUG
log4j.logger.com.zaxxer.hikari = DEBUG
I deployed to production just a few minutes ago, so we are waiting for more information. As soon as I get something, I will let you know.
Regards.
After upgrading HikariCP I haven't been able to reproduce the timeout problem so far, but I found something strange during some network problems: at first almost all connections were available, then when the network error occurred, exactly 4 connections remained permanently "inUse" and were never released. Here is the log:
08:53:52,242 78614273 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After cleanup pool stats ZEUS_APP_POOL (total=30, inUse=1, avail=29, waiting=0)
08:53:52,243 78614274 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After fill pool stats ZEUS_APP_POOL (total=30, inUse=1, avail=29, waiting=0)
08:54:15,463 78637494 WARN (com.zaxxer.hikari.proxy.ConnectionProxy.checkException(ConnectionProxy.java:109)) - Connection net.sourceforge.jtds.jdbc.JtdsConnection@6d81af64 (ZEUS_APP_POOL) marked as broken because of SQLSTATE(08S01), ErrorCode(0).
08:54:15,469 78637500 ERROR (org.hibernate.engine.jdbc.spi.SqlExceptionHelper.logExceptions(SqlExceptionHelper.java:146)) - I/O Error: Connection reset
08:54:15,471 78637502 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.releaseConnection(BaseHikariPool.java:242)) - Connection returned to pool ZEUS_APP_POOL is broken or evicted. Closing connection.
08:54:15,473 78637504 ERROR (org.hibernate.engine.jdbc.spi.SqlExceptionHelper.logExceptions(SqlExceptionHelper.java:146)) - Invalid state, the Connection object is closed.
08:54:15,473 78637504 DEBUG (com.zaxxer.hikari.pool.PoolUtilities.quietlyCloseConnection(PoolUtilities.java:52)) - Closing connection net.sourceforge.jtds.jdbc.JtdsConnection@6d81af64
08:54:15,474 78637505 ERROR (org.springframework.transaction.interceptor.TransactionAspectSupport.completeTransactionAfterThrowing(TransactionAspectSupport.java:504)) - Application exception overridden by rollback exception
08:54:15,479 78637510 ERROR (idiada.pattern.business.modules.GenericConsultorDelegate.countObjectsBySearchCriteriaRoTx(GenericConsultorDelegate.java:535)) - Could not close connection
08:54:22,243 78644274 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - Before cleanup pool stats ZEUS_APP_POOL (total=29, inUse=3, avail=26, waiting=0)
08:54:22,244 78644275 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After cleanup pool stats ZEUS_APP_POOL (total=29, inUse=3, avail=26, waiting=0)
08:54:22,297 78644328 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After fill pool stats ZEUS_APP_POOL (total=30, inUse=3, avail=27, waiting=0)
08:54:29,397 78651428 WARN (com.zaxxer.hikari.util.LeakTask.run(LeakTask.java:86)) - Connection leak detection triggered, stack trace follows
08:54:36,086 78658117 WARN (com.zaxxer.hikari.util.LeakTask.run(LeakTask.java:86)) - Connection leak detection triggered, stack trace follows
08:54:36,810 78658841 WARN (com.zaxxer.hikari.util.LeakTask.run(LeakTask.java:86)) - Connection leak detection triggered, stack trace follows
08:54:52,243 78674274 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - Before cleanup pool stats ZEUS_APP_POOL (total=30, inUse=3, avail=27, waiting=0)
08:54:52,243 78674274 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After cleanup pool stats ZEUS_APP_POOL (total=30, inUse=3, avail=27, waiting=0)
08:54:52,244 78674275 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After fill pool stats ZEUS_APP_POOL (total=30, inUse=3, avail=27, waiting=0)
08:55:22,244 78704275 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - Before cleanup pool stats ZEUS_APP_POOL (total=30, inUse=3, avail=27, waiting=0)
08:55:22,245 78704276 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After cleanup pool stats ZEUS_APP_POOL (total=30, inUse=3, avail=27, waiting=0)
08:55:22,246 78704277 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After fill pool stats ZEUS_APP_POOL (total=30, inUse=3, avail=27, waiting=0)
08:55:52,244 78734275 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - Before cleanup pool stats ZEUS_APP_POOL (total=30, inUse=2, avail=28, waiting=0)
08:55:52,245 78734276 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After cleanup pool stats ZEUS_APP_POOL (total=30, inUse=2, avail=28, waiting=0)
08:55:52,246 78734277 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After fill pool stats ZEUS_APP_POOL (total=30, inUse=2, avail=28, waiting=0)
08:56:22,247 78764278 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - Before cleanup pool stats ZEUS_APP_POOL (total=30, inUse=4, avail=26, waiting=0)
08:56:22,247 78764278 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After cleanup pool stats ZEUS_APP_POOL (total=30, inUse=4, avail=26, waiting=0)
08:56:22,249 78764280 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After fill pool stats ZEUS_APP_POOL (total=30, inUse=4, avail=26, waiting=0)
08:56:47,695 78789726 WARN (com.zaxxer.hikari.util.LeakTask.run(LeakTask.java:86)) - Connection leak detection triggered, stack trace follows
08:56:51,687 78793718 WARN (com.zaxxer.hikari.util.LeakTask.run(LeakTask.java:86)) - Connection leak detection triggered, stack trace follows
08:56:52,245 78794276 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - Before cleanup pool stats ZEUS_APP_POOL (total=30, inUse=5, avail=25, waiting=0)
08:56:52,246 78794277 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After cleanup pool stats ZEUS_APP_POOL (total=30, inUse=5, avail=25, waiting=0)
08:56:52,247 78794278 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After fill pool stats ZEUS_APP_POOL (total=30, inUse=5, avail=25, waiting=0)
08:56:58,767 78800798 WARN (com.zaxxer.hikari.proxy.ConnectionProxy.checkException(ConnectionProxy.java:109)) - Connection net.sourceforge.jtds.jdbc.JtdsConnection@2e031772 (ZEUS_APP_POOL) marked as broken because of SQLSTATE(08S01), ErrorCode(0).
08:56:58,773 78800804 ERROR (org.hibernate.engine.jdbc.spi.SqlExceptionHelper.logExceptions(SqlExceptionHelper.java:146)) - I/O Error: Connection reset
08:56:58,775 78800806 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.releaseConnection(BaseHikariPool.java:242)) - Connection returned to pool ZEUS_APP_POOL is broken or evicted. Closing connection.
08:56:58,777 78800808 ERROR (org.hibernate.engine.jdbc.spi.SqlExceptionHelper.logExceptions(SqlExceptionHelper.java:146)) - Invalid state, the Connection object is closed.
08:56:58,777 78800808 DEBUG (com.zaxxer.hikari.pool.PoolUtilities.quietlyCloseConnection(PoolUtilities.java:52)) - Closing connection net.sourceforge.jtds.jdbc.JtdsConnection@2e031772
08:56:58,778 78800809 ERROR (org.springframework.transaction.interceptor.TransactionAspectSupport.completeTransactionAfterThrowing(TransactionAspectSupport.java:504)) - Application exception overridden by rollback exception
08:56:58,783 78800814 ERROR (idiada.pattern.business.modules.GenericConsultorDelegate.getObjectsByExampleRoTx(GenericConsultorDelegate.java:166)) - Could not close connection
08:57:22,246 78824277 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - Before cleanup pool stats ZEUS_APP_POOL (total=29, inUse=6, avail=23, waiting=0)
08:57:22,247 78824278 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After cleanup pool stats ZEUS_APP_POOL (total=29, inUse=6, avail=23, waiting=0)
08:57:22,330 78824361 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After fill pool stats ZEUS_APP_POOL (total=30, inUse=5, avail=25, waiting=0)
08:57:34,065 78836096 WARN (com.zaxxer.hikari.util.LeakTask.run(LeakTask.java:86)) - Connection leak detection triggered, stack trace follows
08:57:42,205 78844236 WARN (com.zaxxer.hikari.util.LeakTask.run(LeakTask.java:86)) - Connection leak detection triggered, stack trace follows
08:57:52,246 78854277 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - Before cleanup pool stats ZEUS_APP_POOL (total=30, inUse=5, avail=25, waiting=0)
08:57:52,247 78854278 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After cleanup pool stats ZEUS_APP_POOL (total=30, inUse=5, avail=25, waiting=0)
08:57:52,248 78854279 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After fill pool stats ZEUS_APP_POOL (total=30, inUse=5, avail=25, waiting=0)
08:58:22,248 78884279 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - Before cleanup pool stats ZEUS_APP_POOL (total=30, inUse=4, avail=26, waiting=0)
08:58:22,248 78884279 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After cleanup pool stats ZEUS_APP_POOL (total=30, inUse=4, avail=26, waiting=0)
Then, a few hours later, the same 4 connections are still "inUse", and this value never decreases:
13:08:22,536 93884567 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - Before cleanup pool stats ZEUS_APP_POOL (total=30, inUse=4, avail=26, waiting=0)
13:08:22,537 93884568 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After cleanup pool stats ZEUS_APP_POOL (total=30, inUse=4, avail=26, waiting=0)
13:08:22,537 93884568 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After fill pool stats ZEUS_APP_POOL (total=30, inUse=4, avail=26, waiting=0)
What can be happening?
Thanks. It's midnight here in Tokyo, I'll take a look at this more deeply in the morning.
@sergi-mm Ok, here is what I get from the log above...
As far as I can tell, HikariCP is functioning properly. Whenever a connection is detected as broken, HikariCP correctly removes it from the pool. You can see this from the drop in the total connection count.
First occurrence:
08:53:52,243 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After fill pool stats ZEUS_APP_POOL (total=30, inUse=1, avail=29, waiting=0)
08:54:15,463 WARN ... - Connection net.sourceforge.jtds.jdbc.JtdsConnection@6d81af64 (ZEUS_APP_POOL) marked as broken because of SQLSTATE(08S01), ErrorCode(0).
...
08:54:22,243 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - Before cleanup pool stats ZEUS_APP_POOL (total=29, inUse=3, avail=26, waiting=0)
Second occurrence:
08:56:52,247 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - After fill pool stats ZEUS_APP_POOL (total=30, inUse=5, avail=25, waiting=0)
08:56:58,767 WARN ... - Connection net.sourceforge.jtds.jdbc.JtdsConnection@2e031772 (ZEUS_APP_POOL) marked as broken because of SQLSTATE(08S01), ErrorCode(0).
...
08:57:22,246 DEBUG (com.zaxxer.hikari.pool.BaseHikariPool.logPoolState(BaseHikariPool.java:537)) - Before cleanup pool stats ZEUS_APP_POOL (total=29, inUse=6, avail=23, waiting=0)
Of more concern are the leak detection logs. Just because leak detection fires doesn't mean that a connection was actually leaked; it just means that it wasn't returned to the pool within the leakDetectionThreshold. I would suggest raising the leakDetectionThreshold to eliminate any possibly "long use" connections and catch only truly leaked connections. Whatever your current threshold is, I would say double that value.
Equally concerning is that neither the leak log nor the broken connection logs emitted stack traces, as they should have. If you are running slf4j at a version earlier than 1.7.0, you need to upgrade this dependency (current is 1.7.10).
I have published HikariCP 2.3.4 to the central repository; it should be available about 3 hours after this comment. With respect to this issue, it just includes slightly better logging of leaked connections.
Hello,
After some monitoring, we only got this logs when problem happens:
2015-03-20 07:14:19.851 DEBUG [HikariCP connection closer (pool 12-DB12)] com.zaxxer.hikari.pool.PoolUtilities [PoolUtilities.java:52] Closing connection net.sourceforge.jtds.jdbc.JtdsConnection@3b0a8eae
2015-03-20 07:14:19.852 DEBUG [Hikari Housekeeping Timer (pool 12-DB12)] com.zaxxer.hikari.pool.PoolUtilities [PoolUtilities.java:52] Closing connection net.sourceforge.jtds.jdbc.JtdsConnection@170e76be
2015-03-20 07:14:19.852 DEBUG [HikariCP connection closer (pool 12-DB12)] com.zaxxer.hikari.pool.PoolUtilities [PoolUtilities.java:52] Closing connection net.sourceforge.jtds.jdbc.JtdsConnection@6f90c92f
2015-03-20 07:14:19.853 DEBUG [HikariCP connection closer (pool 12-DB12)] com.zaxxer.hikari.pool.PoolUtilities [PoolUtilities.java:52] Closing connection net.sourceforge.jtds.jdbc.JtdsConnection@43f46ded
2015-03-20 07:14:19.853 DEBUG [HikariCP connection closer (pool 12-DB12)] com.zaxxer.hikari.pool.PoolUtilities [PoolUtilities.java:52] Closing connection net.sourceforge.jtds.jdbc.JtdsConnection@3eab8b9a
2015-03-20 07:14:19.853 DEBUG [HikariCP connection closer (pool 12-DB12)] com.zaxxer.hikari.pool.PoolUtilities [PoolUtilities.java:52] Closing connection net.sourceforge.jtds.jdbc.JtdsConnection@51bb1d1f
2015-03-20 07:14:19.854 DEBUG [HikariCP connection closer (pool 12-DB12)] com.zaxxer.hikari.pool.PoolUtilities [PoolUtilities.java:52] Closing connection net.sourceforge.jtds.jdbc.JtdsConnection@380b6fb6
Such logs also appear constantly, so this looks like nothing unusual. But after some investigation on the SQL Server side, CPU usage spikes were found at the same times the errors happen, so we changed our configuration a little to give it more resources and larger time limits, and it looks like it's working OK for now. We will need to work on this some more, though, since no query here should last more than 5 seconds. We will upgrade to 2.3.4 ASAP and see what info it provides us.
@adnanisajbeg You can set the com.zaxxer.hikari.pool.PoolUtilities class to INFO level logging to keep down the noise, while still running com.zaxxer.hikari at DEBUG.
@adnanisajbeg Just to let you know, we've committed changes to the master branch to add "connection closure reasons" to the debug log of the connection closer above. It will appear in the future 2.4.0 release.
That's great! Thank you
Hi
I'm having a problem after upgrading to HikariCP 2.4.5 from c3p0 0.9.1.2. After 1 month in production, the "Connection is not available" error occurred. I'm using Java 1.7.0_80, Hibernate 3.2.6, and Oracle 10g; here's the configuration for HikariCP 2.4.5:
initializationFailFast set to true
maximumPoolSize set to 200
ERROR [qtp299564770-1087632] JDBCExceptionReporter.logExceptions(78) | HikariPool-1 - Connection is not available, request timed out after 30000ms.
WARN [qtp299564770-1087632] JDBCExceptionReporter.logExceptions(77) | SQL Error: 0, SQLState: null
ERROR [qtp299564770-1087632] JDBCExceptionReporter.logExceptions(78) | HikariPool-1 - Connection is not available, request timed out after 30000ms.
WARN [qtp299564770-1086408] JDBCExceptionReporter.logExceptions(77) | SQL Error: 0, SQLState: null
ERROR [qtp299564770-1086408] JDBCExceptionReporter.logExceptions(78) | HikariPool-1 - Connection is not available, request timed out after 30006ms.
WARN [qtp299564770-1086408] JDBCExceptionReporter.logExceptions(77) | SQL Error: 0, SQLState: null
ERROR [qtp299564770-1086408] JDBCExceptionReporter.logExceptions(78) | HikariPool-1 - Connection is not available, request timed out after 30006ms.
WARN [qtp299564770-1087656] JDBCExceptionReporter.logExceptions(77) | SQL Error: 0, SQLState: null
ERROR [qtp299564770-1087656] JDBCExceptionReporter.logExceptions(78) | HikariPool-1 - Connection is not available, request timed out after 30008ms.
WARN [qtp299564770-1087656] JDBCExceptionReporter.logExceptions(77) | SQL Error: 0, SQLState: null
ERROR [qtp299564770-1087656] JDBCExceptionReporter.logExceptions(78) | HikariPool-1 - Connection is not available, request timed out after 30008ms.
WARN [qtp299564770-1087641] JDBCExceptionReporter.logExceptions(77) | SQL Error: 0, SQLState: null
I have a few suggestions and a few questions.
First, upgrade to 2.4.6. Second, enable the leakDetectionThreshold, maybe something like one or two minutes.
What database are you using? 200 connections seems quite excessive; maybe in your application it is necessary, but I do recommend reading this article about Pool Sizing.
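If memory serves, that article's rule-of-thumb starting point is connections = (core count x 2) + effective spindle count. A quick sketch of what that yields on a given machine (the spindle count here is an assumption you would replace with your own storage setup):

public class PoolSizeEstimate {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        int effectiveSpindles = 1; // assumption: single-disk or SSD-backed server
        // Rule of thumb from the pool-sizing discussion referenced above.
        int suggested = cores * 2 + effectiveSpindles;
        System.out.println("Suggested starting pool size: " + suggested);
    }
}

On typical hardware this comes out to a few dozen connections at most, which is why 200 looks excessive.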
Hi,
we recently moved from DBCP 1.4.x to HikariCP 2.4.9 (Java 7), running on JRE 1.8.91, with Oracle 12c as the database. A couple of days back we noticed this in our logs:
2016-11-09 12:18:39,641 [ActiveMQ Session Task-3] ERROR com.example.test.Processor - pool1 - Connection is not available, request timed out after 34966ms.
java.sql.SQLTransientConnectionException: pool1 - Connection is not available, request timed out after 34966ms.
at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:550)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:188)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:147)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:99)
2016-11-09 12:18:50,625 [ActiveMQ Session Task-2] ERROR com.example.test.Processor - pool1 - Connection is not available, request timed out after 45114ms.
java.sql.SQLTransientConnectionException: pool1 - Connection is not available, request timed out after 45114ms.
at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:550)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:188)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:147)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:99)
The configuration for the pool is as follows
Max Pool Size - 1000 (please don't ask why, for now we need it this high)
Min Idle - 1
Idle Timeout - 60000
connectionInitSql - some alter session scripts
I doubt there are connection leaks that are using up all the connections, but I haven't ruled that scenario out completely. Also, there is no cause attached to the exception trace that would hint that the DB connection creation timeout was hit. I have enabled HikariCP-specific logs to monitor the activity, but I haven't come across the timeout scenario again yet. Could this be because of a race condition?
Regards, -Yogesh
If there is no cause attached it generally means that it was not the result of a connection failure.
An important fact to keep in mind is that HikariCP timeouts are extremely accurate under normal operating conditions: a 30-second connection timeout will normally trigger within 100ms of that. Of the two timeout exceptions you showed above, one triggered at 34966ms and the other at 45114ms. This indicates to me that the server is seriously overcommitted in terms of CPU.
I recommend monitoring the server CPU (and DB server CPU) with something that maintains some history, and validating whether timeout events are correlated with load spikes.
If there is no cause attached it generally means that it was not the result of a connection failure.
That's right... I typed "was" where I meant "wasn't".
This indicates to me that the server is seriously over committed in terms of CPU.
let me check the monitoring logs for that period and get back to you
@dancernetworks you were spot on... the cpu was loaded.
Thank you for the help!
I have the same problem.
java.sql.SQLTimeoutException: Timeout after 1501ms of waiting for a connection.
at com.zaxxer.hikari.pool.BaseHikariPool.getConnection(BaseHikariPool.java:233)
at com.zaxxer.hikari.pool.BaseHikariPool.getConnection(BaseHikariPool.java:183)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:93)
How can I safely reset the pool in order to get rid of this connection that seems to be stuck? By safely I mean ensuring that the other connections finish their work.
Correct me if I am wrong, but calling close() on the HikariDataSource object is not safe in that sense, and reinitializing the HikariDataSource after the close call seems unsafe and extreme to me. A method that removes the rogue connections from the pool would be a nice option to have.
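One option, assuming a HikariCP version that exposes the pool MXBean directly from the data source (older releases would have to reach the equivalent MBean over JMX instead): softEvictConnections() closes idle connections immediately and marks in-use connections to be closed as soon as they are returned, which is gentler than tearing the whole pool down. A sketch:

import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.HikariPoolMXBean;

public class SoftEvict {
    // 'ds' is assumed to be the application's existing HikariDataSource.
    static void recycleConnections(HikariDataSource ds) {
        HikariPoolMXBean pool = ds.getHikariPoolMXBean();
        // Idle connections are closed immediately; connections currently in use
        // are marked for eviction and closed as soon as they are returned,
        // so in-flight work is allowed to finish.
        pool.softEvictConnections();
    }
}

Note that if the "stuck" connection is genuinely never returned to the pool, soft eviction cannot reclaim it either; in that case the real fix is finding the code path that fails to close it.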
I am running HikariCP 3.1.0 and running into the same problem. I see many idle connections that Hikari is not cleaning up, and it's causing the entire application to freeze. I have set leak detection, but nothing helpful shows up there. Right now I am using session.getCurrentSession() in Hibernate and later flushing everything. I think I will migrate the code to open and close sessions manually; the automatic approach doesn't seem to work. Any ideas?
Hello,
Try checking your connection information: login and password, as well as your host and permissions.
I had the same problem, and after I verified my credentials it worked.
[DEBUG] [] [HikariPool-1 housekeeper] [c.zaxxer.hikari.pool.HikariPool] - HikariPool-1 - Pool stats (total=20, active=0, idle=20, waiting=0).
The application freezes. I am using RabbitMQ and a MySQL database.
My settings are as follows:
config.setDriverClassName("com.mysql.cj.jdbc.Driver");
config.setJdbcUrl("jdbc:mysql://localhost/rahisibet");
config.setUsername("root");
config.setPassword("hacker90");
config.setMaximumPoolSize(20);
config.addDataSourceProperty("cachePrepStmts", "true");
config.addDataSourceProperty("prepStmtCacheSize", "250");
config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
config.addDataSourceProperty("useServerPrepStmts", "true");
config.addDataSourceProperty("useLocalSessionState", "true");
config.addDataSourceProperty("rewriteBatchedStatements", "true");
config.addDataSourceProperty("cacheResultSetMetadata", "true");
config.addDataSourceProperty("cacheServerConfiguration", "true");
config.addDataSourceProperty("maintainTimeStats", "true");
config.addDataSourceProperty("elideSetAutoCommits", "true");
config.addDataSourceProperty("connectionTimeout", "2500");
config.addDataSourceProperty("leakDetectionThreshold", "true");
config.setIdleTimeout(30);
config.setLeakDetectionThreshold(60 * 1000);
I am testing a little UDP server I created. In my test environment, I have a UDP client that runs every 30 seconds and sends a packet to the server. The server decodes the packet and inserts data into a PostgreSQL database. There is absolutely no load on this. Yet sometimes I get this error:
java.sql.SQLException: Timeout of 30000ms encountered waiting for connection
resulting from this code:
dbPool is simply an abstraction over HikariCP:
Why am I getting a connection timeout with HikariCP in this light test environment with only one request every 30 seconds?