mike252004 / spymemcached

Automatically exported from code.google.com/p/spymemcached

Will adding more servers reduce IO wait times on the client side? #104

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
Spy: v2.3.1
OS: Unix, with a very fast network.

We have a highly concurrent app that does a lot of memcached operations, and our response times are high mainly when getting elements from memcached. If I take a jstack of my app at any given time, almost all of my threads look like this:

ajp-8009-200" daemon prio=10 tid=0x623dc800 nid=0x6e5e waiting on condition 
[0x5b3fc000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x657bcf28> (a 
java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(
AbstractQueuedSynchronizer.java:947)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos
(AbstractQueuedSynchronizer.java:1239)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:253)
at 
net.spy.memcached.MemcachedClient$OperationFuture.get(MemcachedClient.java:
1655)
at 
net.spy.memcached.MemcachedClient$GetFuture.get(MemcachedClient.java:1708)

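For context, that stack is just the calling thread parked on the operation future behind a get. A minimal sketch of the two call styles, assuming an already-built client (the key name and timeout are made up for illustration):

    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;
    import net.spy.memcached.MemcachedClient;

    class GetStyles {
        // Blocking form: this is the call behind the stack above; the thread
        // waits on the operation future's CountDownLatch until the reply arrives.
        static Object blockingLookup(MemcachedClient client) {
            return client.get("some-key");
        }

        // Async form: the caller decides how long it is willing to wait.
        static Object boundedLookup(MemcachedClient client)
                throws InterruptedException, ExecutionException {
            Future<Object> f = client.asyncGet("some-key");
            try {
                return f.get(50, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                f.cancel(false); // stop waiting; treat it as a cache miss
                return null;
            }
        }
    }
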
The documentation says that each memcached client will open (and I guess synchronize on) one IO channel per configured server. Will adding more servers to the same "cluster" make this less of a problem, or will the multi-gets make it even worse?
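
For reference, the whole server list goes into a single client; a minimal construction sketch (host names are illustrative only):

    import java.io.IOException;
    import net.spy.memcached.AddrUtil;
    import net.spy.memcached.MemcachedClient;

    class ClientSetup {
        static MemcachedClient build() throws IOException {
            // One client for the whole cluster: it opens one connection to each
            // listed server and hashes every key to exactly one of them.
            return new MemcachedClient(
                AddrUtil.getAddresses("cache1:11211 cache2:11211 cache3:11211"));
        }
    }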

Thanks in advance for a response,
Regards,
AB

Original issue reported on code.google.com by andres.bernasconi@gmail.com on 6 Nov 2009 at 3:04

GoogleCodeExporter commented 8 years ago
Yeah, interesting question. I think it should help, because when you do a get request, spymemcached calculates the hash of the key and, based on that key, decides which server to send the request to.

Original comment by alexkhim...@gmail.com on 11 Nov 2009 at 7:58
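
If you want to see that key-to-server mapping from your own code, something like the following should work with the 2.x client (treat the exact method names as an assumption if your version differs):

    import net.spy.memcached.MemcachedClient;
    import net.spy.memcached.MemcachedNode;
    import net.spy.memcached.NodeLocator;

    class WhichServer {
        static void show(MemcachedClient client, String key) {
            // Ask the client's node locator which server the key hashes to.
            NodeLocator locator = client.getNodeLocator();
            MemcachedNode primary = locator.getPrimary(key);
            System.out.println(key + " -> " + primary.getSocketAddress());
        }
    }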

GoogleCodeExporter commented 8 years ago
I don't think this is a bug, but it may make a good mailing list thread (on either the spymemcached list or the core memcached list).

I'm pretty sure it ends up being application-specific, but others may have some guidelines for you if you describe your problem in more detail.

This stack itself isn't saying much other than that you are waiting for something to happen. If you never see that exact latch fire, though, that would be a bug.

Original comment by dsalli...@gmail.com on 11 Nov 2009 at 8:22

GoogleCodeExporter commented 8 years ago
Yeah, you are all right; I mistakenly put this under "Issues", and I personally don't think it is a bug. I was looking more for suggestions. We have a memcached setup on Linux machines with very fast networks, and memcached is our bottleneck. Adding more servers didn't help, I guess because of the high concurrency of our "get" methods, and they might be doing multi-gets. Eventually we ended up using multiple clients for the same memcached servers and our latency went away; the downside is that we now have a lot more connections per server/client.

Original comment by andres.bernasconi@gmail.com on 11 Nov 2009 at 12:18
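
For later readers wondering what "multiple clients for the same memcached servers" could look like, here is a minimal Java sketch (not the original poster's code; pool size and host list are illustrative) that round-robins requests across several independent spymemcached clients:

    import java.io.IOException;
    import java.util.concurrent.atomic.AtomicInteger;
    import net.spy.memcached.AddrUtil;
    import net.spy.memcached.MemcachedClient;

    class ClientPool {
        private final MemcachedClient[] clients;
        private final AtomicInteger counter = new AtomicInteger();

        ClientPool(String servers, int size) throws IOException {
            clients = new MemcachedClient[size];
            for (int i = 0; i < size; i++) {
                // Each client opens its own connection to every listed server.
                clients[i] = new MemcachedClient(AddrUtil.getAddresses(servers));
            }
        }

        MemcachedClient next() {
            // Spread concurrent callers across connections.
            return clients[Math.floorMod(counter.getAndIncrement(), clients.length)];
        }

        Object get(String key) {
            return next().get(key);
        }
    }

The trade-off is exactly what the comment above describes: each extra client adds one more connection to every memcached server.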

GoogleCodeExporter commented 8 years ago
We have a similar problem. Could you please elaborate on what you mean by multiple clients for the same servers? Which language do you use?

Original comment by thapaso...@gmail.com on 11 Nov 2013 at 9:27