ika1209 / redis

Automatically exported from code.google.com/p/redis
BSD 3-Clause "New" or "Revised" License

connection timeout #500

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
What version of Redis are you using, and on what kind of operating system?
2.2
What is the problem you are experiencing?
We have a Redis server, and our web server (which is on the same LAN) connects to it using the phpredis module. Under normal conditions it is fast, but when there are more than 100 connections open on the Redis port we see random network timeouts. I have put two scripts on the web server for testing (a rough equivalent is sketched below):

1. A PHP script that connects to TCP port 6379 every 10 seconds and logs whenever the connection cannot be established within 5 seconds.
2. A PHP script that connects with the Predis client and sends a PING to the Redis server; this also runs every 10 seconds.
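
A minimal shell equivalent of such a probe, assuming redis-cli is installed on the web server and that redis-host and the log path are placeholders for the real values:

# Probe the Redis port every 10 seconds; log any attempt that does not complete within 5 seconds
while true; do
    if ! timeout 5 redis-cli -h redis-host -p 6379 ping > /dev/null 2>&1; then
        echo "$(date '+%F %T') connect/ping failed or timed out" >> /tmp/redis_probe.log
    fi
    sleep 10
done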

Both scripts time out at almost the same time. There is nothing in the logs. Can you please provide some steps to pinpoint the exact issue?

What steps will reproduce the problem?

Do you have an INFO output? Please paste it here.

No

If it is a crash, can you please paste the stack trace that you can find in
the log file or on standard output? This is really useful for us!

Please provide any additional information below.

-Vij

Original issue reported on code.google.com by vijeeshk...@gmail.com on 28 Mar 2011 at 8:22

GoogleCodeExporter commented 9 years ago
A number of issues can cause this. For instance, Redis may be overloaded and unable to serve newly connected clients at that moment (which is why the INFO output is helpful here). It might also be a PHP issue, or an issue related to your network configuration (is the Redis host itself reachable?). Another cause might be that Redis is configured with a very low timeout and closes connections when there is no activity (this needs to be explicitly configured; the default timeout is 300s), although disconnecting timed-out clients does show up in the log.
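
A quick way to check both the timeout setting and the client-side load from the command line, assuming redis-cli can reach the instance (CONFIG GET has been available since Redis 2.0):

# Configured idle-client timeout (0 means idle clients are never disconnected)
redis-cli CONFIG GET timeout

# Snapshot of the client and memory figures that hint at overload
redis-cli INFO | grep -E 'connected_clients|blocked_clients|client_longest_output_list|used_memory_human'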

Original comment by pcnoordh...@gmail.com on 29 Mar 2011 at 8:37

GoogleCodeExporter commented 9 years ago
Thanks. The Redis server itself seems to be reachable.
Can you give me some troubleshooting steps? Can we set the loglevel to debug on the fly?
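
For reference, later Redis releases accept the loglevel parameter via CONFIG SET; whether a 2.2 build does is an assumption worth verifying before relying on it:

# Raise log verbosity without a restart (remember to set it back afterwards)
redis-cli CONFIG SET loglevel debug
redis-cli CONFIG GET loglevel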
We are using the default timeout, and every script opens a new connection, sends its commands, and then closes it.

-Vij

Original comment by vijeeshk...@gmail.com on 29 Mar 2011 at 9:35

GoogleCodeExporter commented 9 years ago
Also, we are sending only hundreds of requests per second, not more. As per the Redis docs, the server should be able to handle more than 100k requests per second.

-vij

Original comment by vijeeshk...@gmail.com on 29 Mar 2011 at 9:46

GoogleCodeExporter commented 9 years ago
I ran tcpdump to log the packets. When the connections were dropped, the server did not send an ACK back to the client; there is nothing else in the dump file.
What may be the reason? Would this be on the Linux side or in Redis itself?
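
One way to narrow down the Linux-vs-Redis question is to capture only the TCP handshake packets and check the kernel's listen-queue counters; the interface name eth0 and the exact counter wording below are assumptions that vary by system:

# Capture only SYN / SYN-ACK packets on the Redis port
tcpdump -i eth0 -nn 'tcp port 6379 and (tcp[tcpflags] & tcp-syn) != 0'

# If SYNs arrive but never get a SYN-ACK, check whether the listen queue is overflowing
netstat -s | grep -i -E 'listen|overflow'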

Thanks for your help

-Vij

Original comment by vijeeshk...@gmail.com on 29 Mar 2011 at 11:57

GoogleCodeExporter commented 9 years ago
I tried logging the number of TCP connections on port 6379 every minute; whenever the issue happens, the connection count is more than 100.
At normal times it is less than 10.
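
For reference, a minimal loop for that kind of logging (the log path is just an example):

# Count established connections to port 6379 once a minute
while true; do
    echo "$(date '+%F %T') $(netstat -tn | grep -c ':6379 .*ESTABLISHED')"
    sleep 60
done >> /tmp/redis_conn_count.log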

-Vij

Original comment by vijeeshk...@gmail.com on 29 Mar 2011 at 12:42

GoogleCodeExporter commented 9 years ago
Maybe the PHP process is running out of file descriptors. Could you check the 
fd limit for both the processes running Redis and PHP (ulimit -n)?
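
For processes that are already running, the effective limit can also be read from /proc, which avoids guessing which shell's ulimit applies; the process names redis-server and php-cgi below are assumptions about this particular setup:

# Effective file-descriptor limits of the running processes
grep 'Max open files' /proc/$(pgrep -o redis-server)/limits
grep 'Max open files' /proc/$(pgrep -o php-cgi)/limits    # or apache2 / php-fpm, depending on how PHP runs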

Original comment by pcnoordh...@gmail.com on 29 Mar 2011 at 1:08

GoogleCodeExporter commented 9 years ago
Hmm, yes, it is set to 1024. I am logging the total number of Redis connections every minute, and it looks like there were instances with 700+ open connections (using netstat). I think it hit the ~1000 limit and that produced these errors.
I have set the timeout value to 300, and my scripts do not reuse a connection for multiple operations. Would a value of 30 be fine in that case?
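
For reference, the idle timeout can be changed at runtime and made persistent in redis.conf; a sketch assuming redis-cli can reach the instance:

# Change the idle-client timeout at runtime (value in seconds)
redis-cli CONFIG SET timeout 30
redis-cli CONFIG GET timeout

# To keep the setting across restarts, put the same value in redis.conf:
#   timeout 30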

-Vij

Original comment by vijeeshk...@gmail.com on 29 Mar 2011 at 7:00

GoogleCodeExporter commented 9 years ago
It looks like the issue is not due to file descriptors; I have logged the number of open files every 10 seconds and it never reached the limit of 1025.
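
For completeness, the descriptor count of the Redis process itself can be read straight from /proc (assuming a single redis-server instance):

# Number of file descriptors currently open in the redis-server process
ls /proc/$(pgrep -o redis-server)/fd | wc -l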

Can you please help me find out what the issue is? I have analyzed the tcpdump and found that packets are not being lost in the network; the server just did not send an ACK for the new requests when the issue happened. Please let me know if you need more info.

Thanks

-Vij

Original comment by vijeeshk...@gmail.com on 1 Apr 2011 at 10:27

GoogleCodeExporter commented 9 years ago
The issue happens when there are more than 100 established connections to the Redis server.
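
A threshold of roughly 100 connections is close to the common Linux default for the TCP accept backlog (net.core.somaxconn = 128), so that limit may be worth checking; this is a hedged guess rather than a confirmed diagnosis:

# Kernel cap on the accept backlog of listening sockets (often 128 by default)
sysctl net.core.somaxconn

# For the Redis listening socket, Send-Q shows the configured backlog limit and
# Recv-Q the number of connections currently waiting to be accepted
ss -ltn 'sport = :6379'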

Original comment by vijeeshk...@gmail.com on 1 Apr 2011 at 10:39

GoogleCodeExporter commented 9 years ago
redis_version:2.2.0
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:24829
uptime_in_seconds:249837
uptime_in_days:2
lru_clock:142324
used_cpu_sys:9589.27
used_cpu_user:2645.72
used_cpu_sys_childrens:731.97
used_cpu_user_childrens:134.06
connected_clients:1
connected_slaves:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:2235368128
used_memory_human:2.08G
used_memory_rss:3167801344
mem_fragmentation_ratio:1.42
use_tcmalloc:0
loading:0
aof_enabled:0
changes_since_last_save:2241
bgsave_in_progress:0
last_save_time:1301657468
bgrewriteaof_in_progress:0
total_connections_received:13739616
total_commands_processed:89080005
expired_keys:0
evicted_keys:0
keyspace_hits:70052629
keyspace_misses:5922576
hash_max_zipmap_entries:64
hash_max_zipmap_value:512
pubsub_channels:0
pubsub_patterns:0
vm_enabled:0
role:master

The Redis INFO output is pasted above.

Original comment by vijeeshk...@gmail.com on 1 Apr 2011 at 11:33

GoogleCodeExporter commented 9 years ago
We experience the same issue with Redis 2.4.6.

Original comment by pavelbar...@gmail.com on 25 Jan 2012 at 4:19

GoogleCodeExporter commented 9 years ago
We also experience this problem. Our PHP application reports occasional "connection timeout" errors.

The Redis version is 2.4.10, installed on Ubuntu 11.10 (Oneiric) with 1.7 GB of RAM. The CPU load average is very low and there is still plenty of free RAM.

Original comment by g...@healthwarehouse.com on 20 Apr 2012 at 3:43

GoogleCodeExporter commented 9 years ago
Same problem here...

redis-cli -a "*****" info
# Server
redis_version:2.5.11
redis_git_sha1:00000000
redis_git_dirty:0
os:Linux 2.6.32-220.el6.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.6
process_id:10664
run_id:0da5f47fff19894c203b8a1ce26eb6e2de2e8f48
tcp_port:6379
uptime_in_seconds:1469
uptime_in_days:0
lru_clock:1030772

# Clients
connected_clients:85
client_longest_output_list:79
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:1712614032
used_memory_human:1.59G
used_memory_rss:1764986880
used_memory_peak:1719926112
used_memory_peak_human:1.60G
used_memory_lua:30720
mem_fragmentation_ratio:1.03
mem_allocator:jemalloc-3.0.0

# Persistence
loading:0
rdb_changes_since_last_save:14274
rdb_bgsave_in_progress:0
rdb_last_save_time:1352485004
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:28
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1

# Stats
total_connections_received:390790
total_commands_processed:8306950
instantaneous_ops_per_sec:6138
rejected_connections:0
expired_keys:197
evicted_keys:0
keyspace_hits:1431736
keyspace_misses:5722620
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:29199

# Replication
role:master
connected_slaves:1
slave0:94.75.227.108,43678,send_bulk

# CPU
used_cpu_sys:537.10
used_cpu_user:207.88
used_cpu_sys_children:126.56
used_cpu_user_children:393.94

# Keyspace
db0:keys=1925581,expires=1

So what can I do now?

Original comment by ad...@ktbt.ru on 9 Nov 2012 at 6:18

GoogleCodeExporter commented 9 years ago
I also see this issue a lot with my Redis instances hosted on EC2.

redis_version: 2.4.17
arch_bits: 64
multiplexing_api: epoll
gcc_version: 4.4.3
uptime_in_seconds: 1645462
uptime_in_days: 19
lru_clock: 1843244
used_cpu_sys: 76865.59
used_cpu_user: 23388.94
used_cpu_sys_children: 675.00
used_cpu_user_children: 1701.44
connected_clients: 302
connected_slaves: 0
client_longest_output_list: 2
client_biggest_input_buf: 0
blocked_clients: 3
used_memory: 684858696
used_memory_human: 653.13M
used_memory_rss: 860618752
mem_fragmentation_ratio: 1.26
mem_allocator: jemalloc-3.0.0
loading: 0
aof_enabled: 1
changes_since_last_save: 22412973
bgsave_in_progress: 0
last_save_time: 1360575666
bgrewriteaof_in_progress: 0
expired_keys: 339809
evicted_keys: 0
keyspace_hits: 359020680
keyspace_misses: 139879596
pubsub_channels: 0
pubsub_patterns: 0
latest_fork_usec: 126223
vm_enabled: 0
role: master
aof_current_size: 823029796
aof_base_size: 600604986
aof_pending_rewrite: 0
aof_buffer_length: 0
aof_pending_bio_fsync: 0
db0: keys=235044,expires=233870

Looking forward to a solution.
Please let me know what to look for in the INFO output.

From,
Morgan

Original comment by morganmc...@gmail.com on 11 Feb 2013 at 7:18

GoogleCodeExporter commented 9 years ago
Hello, I'm having this issue on my website too, especially during high traffic peaks (7k+ online users). The Apache error.log shows this:

PHP Fatal error:  Uncaught exception 'Predis\\Connection\\ConnectionException' 
with message 'Connection timed out [tcp://127.0.0.1:6379]'

Everything else is working fine (PHP, MySQL, nginx...), but this error takes the page down with a 500 error.

Original comment by migu...@gmail.com on 25 Sep 2013 at 11:06