auror opened this issue 6 years ago (status: Open)
2.2K QPS is not a heavy load; is your data too large?
You could also try disabling the connection pool, though that's not recommended.
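For reference, a minimal sketch of what tuning the pool looks like through the XMemcached builder API (the 2.x API and the `localhost:11211` address are assumptions for illustration, not part of this thread's setup):

```java
import java.io.IOException;
import net.rubyeye.xmemcached.MemcachedClient;
import net.rubyeye.xmemcached.MemcachedClientBuilder;
import net.rubyeye.xmemcached.XMemcachedClientBuilder;
import net.rubyeye.xmemcached.utils.AddrUtil;

public class PoolConfigSketch {
    public static void main(String[] args) throws IOException {
        MemcachedClientBuilder builder =
                new XMemcachedClientBuilder(AddrUtil.getAddresses("localhost:11211"));
        // A pool size of 1 (the default) effectively disables pooling:
        // one NIO connection per memcached address.
        builder.setConnectionPoolSize(1);
        MemcachedClient client = builder.build();
        // Per-operation timeout in milliseconds (30 ms, as used later in this thread).
        client.setOpTimeout(30L);
        client.shutdown();
    }
}
```

Note that with a pool size of 1, a single slow or timed-out operation blocks the one connection, which is why the reporter wanted pooling in the first place.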
Hi,
When there's a read timeout on a call, the connection gets closed; we wanted to avoid that problem by using the connection pool. We also couldn't afford the frequent high response-time spikes.
Thanks
@killme2008 I'm facing the same issue, our data is too large. Can you please help?
Hi,
We've been using XMemcachedClient connected to twemproxy, with a set of memcached servers behind it. We've set an OpTimeout of 30 ms.
Over time, the Recv-Q (from netstat) of the TCP socket grows very large, triggering timeouts in XMemcached. As a result, twemproxy is also gradually being killed by going OOM.
Is the XMemcached client slow in receiving data? Should `TCP_RECV_BUFF_SIZE` be increased?

Few more details:
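For a later reader of this thread: if the socket receive buffer is the suspect, the XMemcached builder exposes per-socket options. A hedged sketch follows; the `StandardSocketOption` constant comes from the bundled yanf4j library, and the 64 KB value and address are illustrative assumptions, not a recommendation:

```java
import java.io.IOException;
import com.google.code.yanf4j.core.impl.StandardSocketOption;
import net.rubyeye.xmemcached.MemcachedClient;
import net.rubyeye.xmemcached.XMemcachedClientBuilder;
import net.rubyeye.xmemcached.utils.AddrUtil;

public class RecvBufferSketch {
    public static void main(String[] args) throws IOException {
        XMemcachedClientBuilder builder =
                new XMemcachedClientBuilder(AddrUtil.getAddresses("localhost:11211"));
        // Raise the kernel receive buffer for each connection's socket.
        // 64 KB is illustrative only; the OS may clamp the value
        // (on Linux, see the net.core.rmem_max sysctl).
        builder.setSocketOption(StandardSocketOption.SO_RCVBUF, 64 * 1024);
        MemcachedClient client = builder.build();
        client.shutdown();
    }
}
```

A growing Recv-Q usually means the application isn't draining the socket as fast as data arrives, so a larger buffer only delays the symptom; the read path (response size, reactor thread load) is worth profiling as well.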