mohit-gupta-hs1 opened 7 years ago
@mohit-gupta-hs1, which version of mcrouter are you running (or which commit did you build from)? What command line options are you running mcrouter with?
@jmswen Thanks for responding. We are running version 35.0 with a fairly simple setup. The following command line options are used:
/usr/local/bin/mcrouter --async-dir=/var/lib/mcrouter/spool --route-prefix=/dc1/all/ --num-proxies=16 --stats-root=/var/lib/mcrouter/stats --log-path=/var/log/mcrouter/mcrouter.log --port=22120 --config-file=/etc/mcrouter/mcrouter.json --send-invalid-route-to-default
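For context, a minimal config-file for shadowing one twemproxy backend to another might look like the sketch below, using mcrouter's documented PoolRoute shadowing syntax. The pool names and host addresses are hypothetical, not taken from this thread:

```json
{
  "pools": {
    "classic": { "servers": ["twemproxy-classic.internal:22122"] },
    "vpc":     { "servers": ["twemproxy-vpc.internal:22122"] }
  },
  "route": {
    "type": "PoolRoute",
    "pool": "classic",
    "shadows": [
      {
        "target": { "type": "PoolRoute", "pool": "vpc" },
        "index_range": [0, 0],
        "key_fraction_range": [0, 1]
      }
    ]
  }
}
```

Here each mcrouter "pool" contains a single server, the twemproxy listener, so `index_range` covers only index 0 and `key_fraction_range` of [0, 1] shadows all keys.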
If it helps, we are also running twemproxy on staging with a pool of two nodes in each backend, with the following memcached-specific options:
memcached:
  listen: 127.0.0.1:22122
  hash: fnv1a_64
  distribution: ketama
  timeout: 900
  auto_eject_hosts: true
  server_retry_timeout: 20000
  server_failure_limit: 2
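The hashing mismatch mentioned later in the thread is concrete: `hash: fnv1a_64` plus `distribution: ketama` determines where twemproxy places each key, and a proxy that hashes differently would route the same key to a different node. For reference, a small sketch of the standard 64-bit FNV-1a hash that the `fnv1a_64` setting names (this is the textbook algorithm, not code lifted from either project, and twemproxy's own implementation may differ in details such as truncating the result):

```python
def fnv1a_64(key: bytes) -> int:
    """Standard 64-bit FNV-1a hash."""
    FNV_OFFSET_BASIS = 0xcbf29ce484222325
    FNV_PRIME = 0x100000001b3
    h = FNV_OFFSET_BASIS
    for byte in key:
        h ^= byte
        h = (h * FNV_PRIME) & 0xFFFFFFFFFFFFFFFF  # wrap to 64 bits
    return h

# An empty key hashes to the offset basis (zero iterations of the loop).
print(hex(fnv1a_64(b"")))  # 0xcbf29ce484222325
print(hex(fnv1a_64(b"user:42")))
```

Two proxies only agree on key placement if both the hash function and the ring distribution (ketama) match exactly, which is why the poster kept twemproxy in front of the memcached nodes rather than pointing mcrouter at them directly.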
Looking through the async log, I can see that a lot of deletes have not been going through for the past few days. We set up mcpiper overnight to see if we can get any more output on why these failures are occurring. Do you have any other debugging steps?
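One way to quantify the failed deletes is to count delete entries across the async spool files. The snippet below is a generic sketch only: it assumes one log line per failed operation that mentions "delete"; the sample file it creates is made up for illustration, and real mcrouter async-log entries have their own format and live under the directory given by --async-dir.

```python
import os
import tempfile

# Hypothetical sample spool lines, purely for illustration; real
# async-log entries are formatted differently.
sample = (
    '[[1500000000, 123], "AS1.0", "delete", "user:42"]\n'
    '[[1500000001, 456], "AS1.0", "delete", "session:7"]\n'
)

spool_dir = tempfile.mkdtemp()
with open(os.path.join(spool_dir, "myAsyncLog"), "w") as f:
    f.write(sample)

# Count lines recording a delete across every file in the spool directory.
failed_deletes = 0
for name in os.listdir(spool_dir):
    with open(os.path.join(spool_dir, name)) as f:
        failed_deletes += sum(1 for line in f if '"delete"' in line)

print(failed_deletes)  # 2 for the sample data above
```

Watching this count over time shows whether the failures are ongoing or were a one-off burst.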
@jmswen I upped the TKO limit and the connection timeout intervals, and it seems to be fairly stable now. I have a few questions: Can a shadow pool get TKO'd? From my experience it doesn't seem like it. Do you have any insight into mcrouter and twemproxy interactions? Do you see any pitfalls in doing something like I've described?
I also ran into the same issue. After I added the "--disable-tko-tracking" parameter when starting mcrouter, I was able to get the key.
Hey guys, we currently use twemproxy for memcached connection pooling and hashing. We are moving some of our infrastructure from AWS Classic into a VPC, and we would like to use mcrouter to shadow all the traffic hitting the cluster in Classic (serving prod traffic) to another cluster in the VPC. Once the caches become somewhat consistent, we would make an application config change (or perhaps change a setting on mcrouter?) to flip to the new pool.

However, we are having some issues with mcrouter in our staging environment, particularly around deletes. We see cache invalidation problems: values are still in the cache when they should have been deleted. Since we use twemproxy for hashing (and mcrouter doesn't have the same hashing algorithm), we have configured two backends in it, one for the Classic pool and one for the VPC pool, and mcrouter is set up to point at both twemproxy backends with the following config:
With this setup we get some issues around deletes. Specifically we see a lot of empty values in memcached, i.e.:
when it should actually be
Do you guys know of anyone who has used mcrouter with twemproxy? Do you see any issues with this setup?