twemproxy (pronounced "two-em-proxy"), aka nutcracker, is a fast and lightweight proxy for the memcached and redis protocols. It was built primarily to reduce the number of connections to the caching servers on the backend. This, together with protocol pipelining and sharding, enables you to horizontally scale your distributed caching architecture.
To build twemproxy 0.5.0+ from distribution tarball:
$ ./configure
$ make
$ sudo make install
To build twemproxy 0.5.0+ from distribution tarball in debug mode:
$ CFLAGS="-ggdb3 -O0" ./configure --enable-debug=full
$ make
$ sudo make install
To build twemproxy from source with debug logs enabled and assertions enabled:
$ git clone git@github.com:twitter/twemproxy.git
$ cd twemproxy
$ autoreconf -fvi
$ ./configure --enable-debug=full
$ make
$ src/nutcracker -h
A quick checklist:
- autoreconf -fvi && ./configure needs automake and libtool to be installed
- make check will run unit tests
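On a Debian or Ubuntu host, for example, the build prerequisites can be installed as shown below (package names are an assumption for those distributions; use your platform's equivalents elsewhere):
$ sudo apt-get install autoconf automake libtool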
Distribution tarballs for older twemproxy releases (<= 0.4.1) can be found on Google Drive. The build steps are the same (./configure; make; sudo make install).
Usage: nutcracker [-?hVdDt] [-v verbosity level] [-o output file]
[-c conf file] [-s stats port] [-a stats addr]
[-i stats interval] [-p pid file] [-m mbuf size]
Options:
-h, --help : this help
-V, --version : show version and exit
-t, --test-conf : test configuration for syntax errors and exit
-d, --daemonize : run as a daemon
-D, --describe-stats : print stats description and exit
-v, --verbose=N : set logging level (default: 5, min: 0, max: 11)
-o, --output=S : set logging file (default: stderr)
-c, --conf-file=S : set configuration file (default: conf/nutcracker.yml)
-s, --stats-port=N : set stats monitoring port (default: 22222)
-a, --stats-addr=S : set stats monitoring ip (default: 0.0.0.0)
-i, --stats-interval=N : set stats aggregation interval in msec (default: 30000 msec)
-p, --pid-file=S : set pid file (default: off)
-m, --mbuf-size=N : set size of mbuf chunk in bytes (default: 16384 bytes)
In twemproxy, all the memory for incoming requests and outgoing responses is allocated in mbufs. Mbufs enable zero-copy because the same buffer on which a request was received from the client is used for forwarding it to the server. Similarly, the same mbuf on which a response was received from the server is used for forwarding it to the client.
Furthermore, memory for mbufs is managed using a reuse pool. This means that once an mbuf is allocated, it is not deallocated, but just put back into the reuse pool. By default each mbuf chunk is set to 16K bytes in size. There is a trade-off between the mbuf size and the number of concurrent connections twemproxy can support. A large mbuf size reduces the number of read syscalls made by twemproxy when reading requests or responses. However, with a large mbuf size, every active connection uses up 16K bytes of buffer, which might be an issue when twemproxy is handling a large number of concurrent connections from clients. When twemproxy is meant to handle a large number of concurrent client connections, you should set the chunk size to a small value like 512 bytes using the -m or --mbuf-size=N argument.
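For example, a proxy expected to serve a very large number of client connections could be started with a smaller mbuf chunk size; a sketch, assuming the sample configuration shipped in the source tree:
$ nutcracker -c conf/nutcracker.yml -m 512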
Twemproxy can be configured through a YAML file specified by the -c or --conf-file command-line argument on process start. The configuration file is used to specify the server pools and the servers within each pool that twemproxy manages. The configuration file parser understands keys such as listen, hash, hash_tag, distribution, timeout, backlog, preconnect, redis, auto_eject_hosts, server_retry_timeout, server_failure_limit and servers, all of which appear in the example below.
For example, the configuration file in conf/nutcracker.yml, also shown below, configures 5 server pools named alpha, beta, gamma, delta and omega. Clients that intend to send requests to one of the 10 servers in pool delta connect to port 22124 on 127.0.0.1. Clients that intend to send requests to one of the 2 servers in pool omega connect to the unix path /tmp/gamma. Requests sent to pools alpha and omega have no timeout and might require timeout functionality to be implemented on the client side. On the other hand, requests sent to pools beta, gamma and delta time out after 400 msec, 400 msec and 100 msec respectively when no response is received from the server. Of the 5 server pools, only alpha, gamma and delta are configured to use server ejection and hence are resilient to server failures. All 5 server pools use ketama consistent hashing for key distribution, with the key hasher for pools alpha, beta, gamma and delta set to fnv1a_64 while that for pool omega is set to hsieh. Also, only pool beta uses node names for consistent hashing, while pools alpha, gamma, delta and omega use 'host:port:weight' for consistent hashing. Finally, only pools alpha and beta speak the redis protocol, while pools gamma, delta and omega speak the memcached protocol.
alpha:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:6379:1

beta:
  listen: 127.0.0.1:22122
  hash: fnv1a_64
  hash_tag: "{}"
  distribution: ketama
  auto_eject_hosts: false
  timeout: 400
  redis: true
  servers:
   - 127.0.0.1:6380:1 server1
   - 127.0.0.1:6381:1 server2
   - 127.0.0.1:6382:1 server3
   - 127.0.0.1:6383:1 server4

gamma:
  listen: 127.0.0.1:22123
  hash: fnv1a_64
  distribution: ketama
  timeout: 400
  backlog: 1024
  preconnect: true
  auto_eject_hosts: true
  server_retry_timeout: 2000
  server_failure_limit: 3
  servers:
   - 127.0.0.1:11212:1
   - 127.0.0.1:11213:1

delta:
  listen: 127.0.0.1:22124
  hash: fnv1a_64
  distribution: ketama
  timeout: 100
  auto_eject_hosts: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:11214:1
   - 127.0.0.1:11215:1
   - 127.0.0.1:11216:1
   - 127.0.0.1:11217:1
   - 127.0.0.1:11218:1
   - 127.0.0.1:11219:1
   - 127.0.0.1:11220:1
   - 127.0.0.1:11221:1
   - 127.0.0.1:11222:1
   - 127.0.0.1:11223:1

omega:
  listen: /tmp/gamma 0666
  hash: hsieh
  distribution: ketama
  auto_eject_hosts: false
  servers:
   - 127.0.0.1:11214:100000
   - 127.0.0.1:11215:1
Finally, to make writing a syntactically correct configuration file easier, twemproxy provides a command-line argument -t or --test-conf that can be used to test the YAML configuration file for syntax errors.
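For example, the sample configuration shipped with the source can be checked with:
$ nutcracker -t -c conf/nutcracker.yml
nutcracker exits after reporting whether the file parsed successfully.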
Observability in twemproxy is through logs and stats.
Twemproxy exposes stats at the granularity of server pool and servers per pool through the stats monitoring port by responding with the raw data over TCP. The stats are essentially JSON formatted key-value pairs, with the keys corresponding to counter names. By default stats are exposed on port 22222 and aggregated every 30 seconds. Both these values can be configured on program start using the -s or --stats-port and -i or --stats-interval command-line arguments respectively. You can print the description of all stats exported by twemproxy using the -D or --describe-stats command-line argument.
$ nutcracker --describe-stats
pool stats:
client_eof "# eof on client connections"
client_err "# errors on client connections"
client_connections "# active client connections"
server_ejects "# times backend server was ejected"
forward_error "# times we encountered a forwarding error"
fragments "# fragments created from a multi-vector request"
server stats:
server_eof "# eof on server connections"
server_err "# errors on server connections"
server_timedout "# timeouts on server connections"
server_connections "# active server connections"
requests "# requests"
request_bytes "total request bytes"
responses "# responses"
response_bytes "total response bytes"
in_queue "# requests in incoming queue"
in_queue_bytes "current request bytes in incoming queue"
out_queue "# requests in outgoing queue"
out_queue_bytes "current request bytes in outgoing queue"
See notes/debug.txt for examples of how to read the stats from the stats port.
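As a quick sketch, assuming the default stats port and a host with nc and Python available, the stats can be fetched and pretty-printed with:
$ nc 127.0.0.1 22222 | python -m json.tool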
Logging in twemproxy is only available when twemproxy is built with logging enabled. By default logs are written to stderr. Twemproxy can also be configured to write logs to a specific file through the -o or --output command-line argument. On a running twemproxy, we can turn log levels up and down by sending it SIGTTIN and SIGTTOU signals respectively, and reopen log files by sending it a SIGHUP signal.
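For example, assuming twemproxy was started as a daemon with an explicit pid file and log file (both paths below are illustrative, not defaults):
$ nutcracker -d -c conf/nutcracker.yml -o /var/log/nutcracker.log -p /var/run/nutcracker.pid
$ kill -TTIN $(cat /var/run/nutcracker.pid)  # raise the log level
$ kill -TTOU $(cat /var/run/nutcracker.pid)  # lower the log level
$ kill -HUP $(cat /var/run/nutcracker.pid)   # reopen the log file, e.g. after rotation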
Twemproxy enables proxying multiple client connections onto one or a few server connections. This architectural setup makes it ideal for pipelining requests and responses, and hence saving on the round trip time.
For example, if twemproxy is proxying three client connections onto a single server and we get the requests get key\r\n, set key 0 0 3\r\nval\r\n and delete key\r\n on these three connections respectively, twemproxy would try to batch these requests and send them as a single message onto the server connection as get key\r\nset key 0 0 3\r\nval\r\ndelete key\r\n.
Pipelining is the reason why twemproxy ends up doing better in terms of throughput even though it introduces an extra hop between the client and server.
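The same batching can be observed from a single client by pipelining several memcached commands over one connection, for instance against the gamma pool from the example configuration above (a sketch, assuming that pool is listening on 127.0.0.1:22123 and that your nc supports -q):
$ printf 'set key 0 0 3\r\nval\r\nget key\r\ndelete key\r\n' | nc -q 1 127.0.0.1 22123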
If you are deploying twemproxy in production, consider reading through the recommendation document to understand the parameters you can tune to run it efficiently in a production environment.
Have a bug or a question? Please create an issue here on GitHub!
https://github.com/twitter/twemproxy/issues
Thank you to all of our contributors!
Copyright 2012 Twitter, Inc.
Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0