sidprak opened this issue 8 years ago
Not a maintainer - I've suspected https://github.com/twitter/twemproxy/pull/595 of causing a similar issue with memcached instead of redis, but that seems almost certainly unrelated to the original issue here: your use case has `auto_eject_hosts: false` and wouldn't be affected.
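For reference, a minimal sketch of a pool with that setting, using the standard twemproxy YAML config format (the pool name, addresses, and other values here are illustrative, not the reporter's actual configuration):

```yaml
# Hypothetical pool definition; auto_eject_hosts is the setting of interest.
alpha:
  listen: 127.0.0.1:22121      # proxy listen address (illustrative)
  redis: true                  # speak the redis protocol to backends
  hash: fnv1a_64
  distribution: ketama
  timeout: 400                 # per-server timeout in milliseconds
  auto_eject_hosts: false      # never eject unresponsive backends from the ring
  servers:
    - 127.0.0.1:6379:1 redis1  # host:port:weight name (illustrative)
```

With `auto_eject_hosts: false`, a temporarily unresponsive backend is never removed from the hash ring, which is why the ejection behavior suspected in that pull request wouldn't apply here.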
It's almost certainly too late since this issue was filed in 2015, and it may have been fixed since then, or the setup may have changed, making this inactionable.
Was CPU usage checked at the time (e.g. with `htop`)? nutcracker is single-threaded, so I'd wonder whether this was caused by CPU exhaustion. Also, were pipelined requests used? That might explain a spike in CPU and memory usage due to other open issues reported for `mbuf_split` being inefficient, but the fact that the default small mbuf size is used makes that seem really unlikely.
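A quick way to check for single-core saturation on a live proxy - a sketch assuming a Linux host with procps installed and a process named `nutcracker`; these commands are not from the original report:

```sh
# Per-thread CPU view of the running proxy; a single thread pinned near 100%
# on one core would point at CPU exhaustion even if overall host CPU looks idle.
top -H -p "$(pidof nutcracker)"
```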
This may be fixed by the patches merged into twitter/twemproxy that are planned for the 0.5.0 release.
Hello,
We just experienced an issue on several of our Twemproxy servers that caused them to hang and stop responding to Redis commands. The servers both started around the same time (if that matters) and both stopped responding within 30 seconds of each other. I was wondering if we could get some of your insight into what happened.
The server seemed to be operating normally, then suddenly started consuming a lot of memory and got into a hung state where it didn't respond to any commands. It also didn't log anything during the time it was unavailable. We run automated monitoring on the backend Redis servers and we've verified that they were all alive and responding to commands during this time. The issue resolved itself when I restarted Twemproxy. The `connection timed out` message indicates that it may be a problem with the Redis server not responding in time, but it is also odd that 1) the server was responding to our monitoring and 2) the issue resolved itself after restarting the proxy. Is there a certain Redis command that could cause something like this?

Configuration
Twemproxy log
ps log

This is a list of `ps` entries for Nutcracker every minute during the window.
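As a sketch of how such samples could be captured (not the reporter's actual tooling; assumes a Linux host with procps and a single `nutcracker` process):

```sh
# Record one line of CPU, memory (RSS/VSZ), and uptime per minute for nutcracker.
while true; do
  date
  ps -o pid,pcpu,pmem,rss,vsz,etime,cmd -p "$(pidof nutcracker)"
  sleep 60
done
```

A steadily growing RSS across these samples, with the process still alive but not answering clients, would line up with the memory spike and hang described above.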