bigdata4u / spymemcached

Automatically exported from code.google.com/p/spymemcached

So many Threads waiting for the get method #89

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
I've run into a problem recently. Our website becomes very slow at peak time,
and when I dump the JVM I find hundreds of threads in TIMED_WAITING (parking)
state, while CPU and memory usage are not very high.
Here is the detailed information:

 "ActiveMQ Session Task" prio=10 tid=0x00002aabbd798800 nid=0x6f63 waiting on condition [0x000000006aa60000..0x000000006aa60d90]
   java.lang.Thread.State: TIMED_WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x00002aaac5385f20> (a java.util.concurrent.CountDownLatch$Sync)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:947)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1239)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:253)
        at net.spy.memcached.MemcachedClient$OperationFuture.get(MemcachedClient.java:1486)
        at net.spy.memcached.MemcachedClient$GetFuture.get(MemcachedClient.java:1539)
        at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:758)
        at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:778)
        at com.googlecode.hibernate.memcached.spymemcached.SpyMemcache.get(SpyMemcache.java:29)
        at com.googlecode.hibernate.memcached.MemcachedCache.memcacheGet(MemcachedCache.java:124)
        at com.googlecode.hibernate.memcached.MemcachedCache.get(MemcachedCache.java:153)
        at org.hibernate.cache.NonstrictReadWriteCache.get(NonstrictReadWriteCache.java:69)
        at org.hibernate.cache.impl.bridge.EntityAccessStrategyAdapter.get(EntityAccessStrategyAdapter.java:55)
        at org.hibernate.event.def.DefaultLoadEventListener.loadFromSecondLevelCache(DefaultLoadEventListener.java:524)
        at org.hibernate.event.def.DefaultLoadEventListener.doLoad(DefaultLoadEventListener.java:397)
        at org.hibernate.event.def.DefaultLoadEventListener.load(DefaultLoadEventListener.java:165)
        at org.hibernate.event.def.DefaultLoadEventListener.proxyOrLoad(DefaultLoadEventListener.java:223)
        at org.hibernate.event.def.DefaultLoadEventListener.onLoad(DefaultLoadEventListener.java:126)
        at org.hibernate.impl.SessionImpl.fireLoad(SessionImpl.java:905)
        at org.hibernate.impl.SessionImpl.get(SessionImpl.java:842)
        at org.hibernate.impl.SessionImpl.get(SessionImpl.java:835)
        at com.crushorflush.service.notification.serializer.HibernateEntitySerializer.unSerialize(HibernateEntitySerializer.java:32)
        at com.crushorflush.service.notification.impl.activemq.ActiveMQManager.notifyListeners(ActiveMQManager.java:156)
        at com.crushorflush.service.notification.impl.activemq.ActiveMQLoginAgent$1.onMessage(ActiveMQLoginAgent.java:70)
        at org.apache.activemq.ActiveMQMessageConsumer.dispatch(ActiveMQMessageConsumer.java:1021)
        - locked <0x00002aab4a084a78> (a java.lang.Object)
        at org.apache.activemq.ActiveMQSessionExecutor.dispatch(ActiveMQSessionExecutor.java:122)
        at org.apache.activemq.ActiveMQSessionExecutor.iterate(ActiveMQSessionExecutor.java:192)
        at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:122)
        at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:43)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)

So we have to restart the web servers, and then it works well again. This
happens every few days.

  I am using hibernate-memcached 1.1.0 with memcached 1.2.6 on Red Hat
Enterprise Linux Server release 5.3 (Tikanga). The JDK is 1.6.0_11 and
spymemcached is 2.2.
  I have checked the source code and have no idea what happened; it worked
well for a long time.
  Does anyone else see the same problem, or have any comments on it?
  Thanks very much.

Hogan

Original issue reported on code.google.com by hoga...@gmail.com on 15 Sep 2009 at 1:30
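[For reference: the trace above shows callers parked in MemcachedClient.get(), which blocks on a CountDownLatch until the operation completes. One common mitigation, sketched below under assumptions, is to bound the wait: use asyncGet, call the returned Future's timed get, and cancel and fall through to the database on timeout. The sketch stands in a plain ExecutorService for the memcached I/O thread so it is self-contained; the key name and latencies are made up for illustration.]

```java
import java.util.concurrent.*;

public class BoundedGet {
    // Stand-in for the cache round trip. With spymemcached this would be
    // client.asyncGet(key), which also returns a Future.
    static Future<String> asyncGet(ExecutorService io, String key, long simulatedLatencyMs) {
        return io.submit(() -> {
            Thread.sleep(simulatedLatencyMs);   // pretend network latency
            return "value-for-" + key;
        });
    }

    // Wait at most timeoutMs; on timeout, cancel and return null so the
    // caller can fall back to the database instead of parking forever.
    static String getWithTimeout(Future<String> f, long timeoutMs) {
        try {
            return f.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(false);
            return null;
        } catch (InterruptedException | ExecutionException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        ExecutorService io = Executors.newSingleThreadExecutor();
        // A fast reply completes within the budget.
        System.out.println(getWithTimeout(asyncGet(io, "k1", 10), 500));
        // A stalled reply yields null after 100 ms instead of blocking the thread.
        System.out.println(getWithTimeout(asyncGet(io, "k2", 2000), 100));
        io.shutdownNow();
    }
}
```

This bounds worst-case latency per cache lookup but does not explain why the operations stall in the first place; it only keeps a slow cache from stalling every consumer thread.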

GoogleCodeExporter commented 8 years ago
Yes, I have the same issue.
We have 25 threads making requests to memcached, and they run slowly. After
profiling I found this:
_______________________________
Name,"Time (ms)","Count"
sun.nio.ch.WindowsSelectorImpl.wakeup(),"74771","11664"
net.spy.memcached.MemcachedConnection.addOperation(MemcachedNode, Operation),"",""
net.spy.memcached.MemcachedConnection.addOperation(String, Operation),"",""
net.spy.memcached.MemcachedClient.addOp(String, Operation),"",""
net.spy.memcached.MemcachedClient.asyncGet(String, Transcoder),"",""
net.spy.memcached.MemcachedClient.asyncGet(String),"",""
_______________________________

So every thread spends 74 seconds of the 300-second total waiting...
Is there any way to fix this? Could some connection pooling be added?

Original comment by alexkhim...@gmail.com on 30 Oct 2009 at 9:20

GoogleCodeExporter commented 8 years ago
All this is really telling me is that there's a lot of time spent on the
network.

Do these ever complete, or do they hang indefinitely?

The specific behavior of the Windows wakeup seems to be another issue. That's
saying it takes about 6 milliseconds every time I tell the connection to wake
up because there's more local work to do. I'd consider that a bug in Windows
and/or Java.

Original comment by dsalli...@gmail.com on 11 Nov 2009 at 7:23

GoogleCodeExporter commented 8 years ago
Is there any way to increase the connection count to memcached? Some type of
connection pooling like in Xmemcached?
I think I have this blocking/waiting issue because all threads use one
spymemcached connection to communicate with the server.

Original comment by alexkhim...@gmail.com on 11 Nov 2009 at 7:55
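[For reference: spymemcached multiplexes all callers over a single connection per client instance, so the pooling people improvise is simply several MemcachedClient instances handed out round-robin. A minimal sketch of that selection logic, generic over the client type so it runs standalone; with spymemcached, T would be MemcachedClient, one instance per connection:]

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Round-robin hand-out over a fixed set of clients.
public class RoundRobinPool<T> {
    private final List<T> clients;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinPool(List<T> clients) {
        if (clients.isEmpty()) throw new IllegalArgumentException("empty pool");
        this.clients = clients;
    }

    public T acquire() {
        // floorMod keeps the index valid even after the counter overflows
        return clients.get(Math.floorMod(next.getAndIncrement(), clients.size()));
    }

    public static void main(String[] args) {
        RoundRobinPool<String> pool =
                new RoundRobinPool<>(Arrays.asList("conn-0", "conn-1", "conn-2"));
        for (int i = 0; i < 4; i++) {
            System.out.println(pool.acquire());   // cycles conn-0, conn-1, conn-2, conn-0
        }
    }
}
```

Whether this actually helps is contested; as the maintainer's reply notes, the extra connections share one selector and reduce batching opportunities, so it is worth benchmarking before adopting.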

GoogleCodeExporter commented 8 years ago
Why do you feel more connections would help?

The only theoretical advantage multiple connections will get you would be to
cheat the TCP congestion avoidance algorithm in certain cases. It's otherwise
the same two processes talking.

There's no reported congestion on the thread. The closest I see is Windows
being slow to handle wakeup events on a selector. The same selector would be
used for more connections, so if anything, I'd expect it to be worse.

Adding more connections means reducing multiget/multiset escalations and
deduplication optimizations, so it could very well make things worse.

It's possibly worth testing, but it's not intuitive to me that more
connections would help this issue.

Original comment by dsalli...@gmail.com on 11 Nov 2009 at 8:17