Closed gerasimhovhannisyan closed 3 years ago
How many connections to twemproxy? Does this happen always or just sometimes? We need many more details.
On 12 Aug 2015, at 17:18, gerasimhovhannisyan wrote:
Hi all, I have errors in nutcracker.log. It is running in debug mode and here are the logs:
[2015-08-12 14:08:19.281] nc_core.c:237 close c 529 '127.0.0.1:17958' on event 00FF eof 0 done 0 rb 993790 sb 399415742: Invalid argument
[2015-08-12 14:08:19.538] nc_proxy.c:377 accepted c 529 on p 19 from '127.0.0.1:18198'
[2015-08-12 14:08:45.414] nc_core.c:237 close c 529 '127.0.0.1:18198' on event 00FF eof 0 done 0 rb 224930 sb 61853847: Invalid argument
[2015-08-12 14:08:45.671] nc_proxy.c:377 accepted c 529 on p 19 from '127.0.0.1:18252'
[2015-08-12 14:08:46.050] nc_core.c:237 close c 535 '127.0.0.1:18038' on event 00FF eof 0 done 0 rb 1705800 sb 415883481: Invalid argument
[2015-08-12 14:08:46.307] nc_proxy.c:377 accepted c 535 on p 19 from '127.0.0.1:18253'
[2015-08-12 14:08:56.348] nc_core.c:237 close c 541 '127.0.0.1:17788' on event 00FF eof 0 done 0 rb 2637728 sb 984732541: Invalid argument
[2015-08-12 14:08:56.606] nc_proxy.c:377 accepted c 541 on p 19 from '127.0.0.1:18285'
[2015-08-12 14:09:00.518] nc_core.c:237 close c 28 '127.0.0.1:18010' on event 00FF eof 0 done 0 rb 805366 sb 343271145: Invalid argument
[2015-08-12 14:09:00.774] nc_core.c:237 close c 541 '127.0.0.1:18285' on event 00FF eof 0 done 0 rb 27984 sb 16251824: Invalid argument
[2015-08-12 14:09:00.776] nc_proxy.c:377 accepted c 28 on p 19 from '127.0.0.1:18299'
[2015-08-12 14:09:01.032] nc_proxy.c:377 accepted c 541 on p 19 from '127.0.0.1:18300'
[2015-08-12 14:09:11.612] nc_core.c:237 close c 529 '127.0.0.1:18252' on event 00FF eof 0 done 0 rb 364633 sb 98015102: Invalid argument
[2015-08-12 14:09:11.870] nc_proxy.c:377 accepted c 529 on p 19 from '127.0.0.1:18329'
[2015-08-12 14:09:32.215] nc_core.c:237 close c 541 '127.0.0.1:18300' on event 00FF eof 0 done 0 rb 808399 sb 146614443: Invalid argument
[2015-08-12 14:09:32.472] nc_proxy.c:377 accepted c 541 on p 19 from '127.0.0.1:18371'
I ran it with: nutcracker -c /etc/nutcracker.yml -d -o /var/log/redis/nutcracker.log
Here is the configuration:

redis-users:
  listen: 0.0.0.0:22122
  redis: true
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: false
  server_retry_timeout: 30000
  server_failure_limit: 3
  timeout: 500000
  backlog: 4096
  preconnect: true
  server_connections: 32
  servers:
   - 10.10.10.10:6379:1 server0
   - 10.10.10.10:6378:1 server1
   - 10.10.10.10:6377:1 server2
   - 20.20.20.20:6379:1 server3
   - 20.20.20.20:6378:1 server4
   - 20.20.20.20:6377:1 server5
   - 30.30.30.30:6379:1 server6
   - 30.30.30.30:6378:1 server7
   - 30.30.30.30:6377:1 server8
   - 40.40.40.40:6379:1 server9
   - 40.40.40.40:6378:1 server10
   - 40.40.40.40:6377:1 server11

On the application side we have lots of errors:

error_message: 'Redis connection to 127.0.0.1:22122 failed - connect ECONNREFUSED',
stack: 'Error: Redis connection to 127.0.0.1:22122 failed - connect ECONNREFUSED\n at RedisClient.on_error (/var/www/node_modules/redis/index.js:196:24)\n at Socket. (/var/www/node_modules/redis/index.js:106:14)\n at Socket.emit (events.js:95:17)\n at net.js:440:14\n at process._tickCallback (node.js:419:13)',
Please suggest what could be the cause of this problem?
We have 4 Redis master servers and 4 slaves for them (configured with sentinels), and 9 twemproxy clients. Here are the stats from one of them; twemproxy is installed on the same server as the application.
{ "curr_connections": 535, "redis-users": { "client_connections": 150, "client_eof": 6, "client_err": 159, "forward_error": 0, "fragments": 0, "server0": { "in_queue": 0, "in_queue_bytes": 0, "out_queue": 0, "out_queue_bytes": 0, "request_bytes": 18618154, "requests": 49075, "response_bytes": 12597866367, "responses": 49073, "server_connections": 32, "server_ejected_at": 0, "server_eof": 0, "server_err": 0, "server_timedout": 0 }, "server1": { "in_queue": 0, "in_queue_bytes": 0, "out_queue": 0, "out_queue_bytes": 0, "request_bytes": 20171089, "requests": 48180, "response_bytes": 2366749798, "responses": 48178, "server_connections": 32, "server_ejected_at": 0, "server_eof": 0, "server_err": 0, "server_timedout": 0 }, "server10": { "in_queue": 0, "in_queue_bytes": 0, "out_queue": 0, "out_queue_bytes": 0, "request_bytes": 16742029, "requests": 51750, "response_bytes": 3123953845, "responses": 51749, "server_connections": 32, "server_ejected_at": 0, "server_eof": 0, "server_err": 0, "server_timedout": 0 }, "server11": { "in_queue": 0, "in_queue_bytes": 0, "out_queue": 0, "out_queue_bytes": 0, "request_bytes": 18257340, "requests": 51945, "response_bytes": 12297768629, "responses": 51944, "server_connections": 32, "server_ejected_at": 0, "server_eof": 0, "server_err": 0, "server_timedout": 0 }, "server2": { "in_queue": 0, "in_queue_bytes": 0, "out_queue": 0, "out_queue_bytes": 0, "request_bytes": 18035241, "requests": 57107, "response_bytes": 3367042094, "responses": 57107, "server_connections": 32, "server_ejected_at": 0, "server_eof": 0, "server_err": 0, "server_timedout": 0 }, "server3": { "in_queue": 0, "in_queue_bytes": 0, "out_queue": 0, "out_queue_bytes": 0, "request_bytes": 13009541, "requests": 45809, "response_bytes": 7804666983, "responses": 45807, "server_connections": 32, "server_ejected_at": 0, "server_eof": 0, "server_err": 0, "server_timedout": 0 }, "server4": { "in_queue": 1, "in_queue_bytes": 41, "out_queue": 0, "out_queue_bytes": 0, "request_bytes": 23912593, "requests": 67049, "response_bytes": 11522794789, "responses": 67046, "server_connections": 32, "server_ejected_at": 0, "server_eof": 0, "server_err": 0, "server_timedout": 0 }, "server5": { "in_queue": 0, "in_queue_bytes": 0, "out_queue": 0, "out_queue_bytes": 0, "request_bytes": 13942254, "requests": 83338, "response_bytes": 4223338333, "responses": 83336, "server_connections": 32, "server_ejected_at": 0, "server_eof": 0, "server_err": 0, "server_timedout": 0 }, "server6": { "in_queue": 0, "in_queue_bytes": 0, "out_queue": 0, "out_queue_bytes": 0, "request_bytes": 32553840, "requests": 52383, "response_bytes": 7299796926, "responses": 52381, "server_connections": 32, "server_ejected_at": 0, "server_eof": 0, "server_err": 0, "server_timedout": 0 }, "server7": { "in_queue": 0, "in_queue_bytes": 0, "out_queue": 0, "out_queue_bytes": 0, "request_bytes": 16535177, "requests": 51263, "response_bytes": 8859539821, "responses": 51263, "server_connections": 32, "server_ejected_at": 0, "server_eof": 0, "server_err": 0, "server_timedout": 0 }, "server8": { "in_queue": 0, "in_queue_bytes": 0, "out_queue": 0, "out_queue_bytes": 0, "request_bytes": 12908124, "requests": 59403, "response_bytes": 4659124426, "responses": 59402, "server_connections": 32, "server_ejected_at": 0, "server_eof": 0, "server_err": 0, "server_timedout": 0 }, "server9": { "in_queue": 0, "in_queue_bytes": 0, "out_queue": 0, "out_queue_bytes": 0, "request_bytes": 16754161, "requests": 63674, "response_bytes": 4519886135, "responses": 63673, "server_connections": 
32, "server_ejected_at": 0, "server_eof": 0, "server_err": 0, "server_timedout": 0 }, "server_ejects": 0 }, "service": "nutcracker", "source": "server_name", "timestamp": 1439389926, "total_connections": 700, "uptime": 2061, "version": "0.4.0" }
Do you know how to reproduce this issue for testing?
@gerasimhovhannisyan if you can, rebuild using the options below; it will show which command failed.
$ CFLAGS="-ggdb3 -O0" ./configure --enable-debug=full
$ make
$ sudo make install
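If the debug build still isn't explicit about the failing request, running it at the maximum log verbosity should make the parser log the offending command before the connection is closed. A sketch reusing the paths from the config above (-v 11 is the documented maximum level):

$ nutcracker -c /etc/nutcracker.yml -v 11 -d -o /var/log/redis/nutcracker.log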
If it were memcached, Invalid argument is returned when an item's length is over 1 MB. @gerasimhovhannisyan you can try to trace the requests with sysdig/systemtap and dig deeper into what's going on. But yeah, @charsyam's approach goes first.
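For the sysdig route, a sketch that captures only the failing syscalls of the proxy process (the filter and output fields assume a stock sysdig install):

$ sudo sysdig -p "%evt.time %proc.name %evt.type %evt.args" proc.name=nutcracker and evt.failed=true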
@ton31337 thanks, but @gerasimhovhannisyan shows his config above:
redis: true
so it's probably Redis.
It will be hard, we are using it under load. I can provide all logs and do tests.
Without heavy load it behaves normally?
During tests we haven't seen any errors.
The error that you are seeing is EINVAL (Invalid argument). Here are the scenarios that generate EINVAL: https://github.com/twitter/twemproxy/search?utf8=%E2%9C%93&q=EINVAL. My guess is that you are sending requests that are not supported by twemproxy.
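A quick way to test that theory is to push a suspect command straight through the proxy port from the config above; twemproxy only proxies a subset of the Redis command set, so commands outside it (e.g. KEYS, SUBSCRIBE, MULTI/EXEC) are rejected or cause the client connection to be closed. A hedged sketch:

$ redis-cli -p 22122 set foo bar    # supported, should return OK
$ redis-cli -p 22122 get foo        # supported, should return "bar"
$ redis-cli -p 22122 keys '*'       # not supported by twemproxy; expect an error or a dropped connection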
@manjuraj but he said everything was fine without load, that's really strange :)
Thank you very much, I will check and report back.
We tested it with scripts, not with the original app.
With full debug it shows the same errors, no extra information.
[2015-08-13 06:59:22.205] nc_core.c:237 close c 239 '127.0.0.1:21038' on event 00FF eof 0 done 0 rb 694528 sb 268571639: Invalid argument
[2015-08-13 06:59:22.461] nc_proxy.c:386 accepted c 239 on p 19 from '127.0.0.1:21432'
[2015-08-13 06:59:31.802] nc_core.c:237 close c 239 '127.0.0.1:21432' on event 00FF eof 0 done 0 rb 158534 sb 33012868: Invalid argument
[2015-08-13 06:59:32.059] nc_proxy.c:386 accepted c 239 on p 19 from '127.0.0.1:21455'
For full debug you need ./configure --enable-debug=log, I think.
For example, this can happen if memcache sends a response that isn't expected. The memcache server can do that if you try to set something that's larger than the memcache key length limit, for example (I think? it's been a while)
twemproxy then deliberately closes the connection if it gets an invalid GET response
    default:
        /*
         * Valid responses for a fragmented requests are MSG_RSP_MC_VALUE or,
         * MSG_RSP_MC_END. For an invalid response, we send out SERVER_ERRROR
         * with EINVAL errno
         */
        mbuf = STAILQ_FIRST(&r->mhdr);
        log_hexdump(LOG_ERR, mbuf->pos, mbuf_length(mbuf), "rsp fragment "
                    "with unknown type %d", r->type);
        pr->error = 1;
        pr->err = EINVAL;
        break;
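For context, a "fragmented" request here is a multi-key request that twemproxy splits across backends and then coalesces back into one reply; in Redis mode, MGET/DEL with multiple keys take the same path, so a malformed fragment response would also land in this error branch. A hedged illustration through the proxy port from the config above (the key names are hypothetical):

$ redis-cli -p 22122 mget user:1 user:2 user:3    # keys may live on different shards; twemproxy fans out and coalesces the replies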
Closing due to issue age - see https://github.com/twitter/twemproxy/issues/401#issuecomment-828900343 for why "Invalid argument" specifically may be seen (EINVAL)