GoogleCodeExporter opened 9 years ago
As a preliminary remark, Linux is not a real-time operating system.
When synchronous IPCs are done between clients and servers, there is
absolutely no guarantee that the kernel scheduler will enforce
sub-millisecond latency for ALL roundtrips. On the contrary, the
average latency may be quite good while the maximum latency is
quite bad. This is not specific to Redis ...
Now, there are some factors that can make this situation even worse.
For instance:
- if you use Redis VM
- if your machine swaps (at the OS level)
- if CPU consumption is significant
- if all your 500 clients send queries to Redis at the same exact time.
Some remarks:
>> a web script might issue more than 100 requests to a Redis server
Your script should not perform 100 synchronous accesses to Redis,
but rather pipeline the queries. You would probably get much better
response times this way.
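The arithmetic behind that advice can be sketched with a toy model (the fixed per-roundtrip network cost and per-query processing cost below are illustrative assumptions, not measurements):

```python
def total_latency(n_queries, rtt_ms, proc_ms, pipelined):
    """Toy model of total wall-clock time for n_queries.

    Sequential: each query pays a full network round trip.
    Pipelined: all queries share one round trip; only server-side
    processing is serialized.
    """
    if pipelined:
        return rtt_ms + n_queries * proc_ms
    return n_queries * (rtt_ms + proc_ms)

seq = total_latency(100, rtt_ms=0.5, proc_ms=0.05, pipelined=False)  # ≈ 55 ms
pipe = total_latency(100, rtt_ms=0.5, proc_ms=0.05, pipelined=True)  # ≈ 5.5 ms
```

With 100 queries, the sequential version pays the 0.5 ms round trip 100 times; the pipelined one pays it once, which is why the difference dominates everything else in the script.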
>> watch -n1 "time redis-cli get ANY_key"
It seems a poor way to measure latency: the cost of forking and
launching redis-cli is probably higher than the roundtrip you are
trying to measure.
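A cheaper way to measure latency is to time round trips over a single persistent connection, so no fork/exec cost pollutes the numbers. The sketch below uses a local echo socket as a stand-in peer (an assumption for the sake of a self-contained example); against a real server you would open one TCP connection to Redis and send PING instead:

```python
import socket
import statistics
import threading
import time

def measure_roundtrips(sock, n=1000, payload=b"PING\r\n"):
    """Time n echo round trips over an already-open socket, in ms."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        sock.sendall(payload)
        buf = b""
        while len(buf) < len(payload):
            buf += sock.recv(64)
        samples.append((time.perf_counter() - t0) * 1000.0)
    return samples

# Stand-in peer: a local echo loop. Replace this with a real connection
# to redis-server to measure actual server round trips.
client, server = socket.socketpair()

def echo():
    while True:
        data = server.recv(64)
        if not data:
            break
        server.sendall(data)

threading.Thread(target=echo, daemon=True).start()

samples = measure_roundtrips(client, n=200)
client.close()
print(f"avg={statistics.mean(samples):.3f} ms  max={max(samples):.3f} ms")
```

Reporting both the average and the maximum is the point: as noted above, on a non-real-time kernel the two can differ wildly even when the average looks healthy.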
Regards,
Didier.
Original comment by didier...@gmail.com on 4 Apr 2011 at 4:38
Thanks for the reply, Didier. I understand this is not a perfect benchmark, but
I wanted to report this issue for other people.
One thing though: I ran the same test on MySQL and got a much smoother response
time, between 10 and 15 ms over the network. The max I have seen was 22 ms for
the same "Select time();" query, while on Redis the response time varies greatly.
We have VM enabled, but INFO reports it's not being used, CPU utilization is
below 10%, and we pipeline queries to Redis for sequential parts of the code,
though different parts can issue their own.
What do you think of the timeout-retry strategy? Having timeouts at 5 ms and
retrying 5 times seems to be much better than having 200 ms as a timeout. We
use such small timeouts because a Redis server might be down, and we don't want
requests to be waiting for it (we use it as a caching system). We also had the
case of a "dead" server that kept the Redis port open but didn't reply to any
queries until it was restarted, and each connection waited until the max
timeout.
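For what it's worth, the timeout-and-retry idea described above can be sketched as a small wrapper. Everything here is a placeholder (the `fetch` callable, the timeout value, the retry count); it is not taken from any real client library:

```python
import socket

def get_with_retries(fetch, timeout_s=0.005, retries=5):
    """Try fetch(timeout_s) up to `retries` times, treating each timeout
    as a miss. `fetch` is expected to raise socket.timeout/TimeoutError
    when the server is slow; any other error propagates."""
    for _ in range(retries):
        try:
            return fetch(timeout_s)
        except (socket.timeout, TimeoutError):
            continue
    return None  # fall back to the cache-miss path
```

Worst case this waits 5 × 5 ms = 25 ms instead of 200 ms, at the price of spurious retries whenever a healthy server is merely descheduled for longer than 5 ms, which, as Didier notes below, is close to a kernel scheduler time slice.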
Thanks for your comments
Original comment by npm...@gmail.com on 4 Apr 2011 at 6:00
Hi again,
sorry, I can only offer speculations (to be taken with a grain
of salt).
MySQL spawns one thread per connection. With queries such as
"select time()", there is no contention since no real data is
accessed. The threads are therefore very responsive and because
they are distributed on all the cores, response time variation
is limited.
Redis, on the other hand, runs in one thread. All the queries are
serialized. So if you have 10 queries whose processing time is
1 ms, response time for the first query will be around 1 ms, but
response time for the last one will be at least 10 ms.
When there is no contention, I would say a single-threaded server
tends to generate a more volatile response time than a
one-thread-per-connection server.
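The 10-queries example above can be written down directly (a simple FIFO model where all requests arrive at the same instant and each takes the same processing time):

```python
def serialized_latencies(n_queries, proc_ms):
    """Response time of each request when a single thread serves
    simultaneously-arriving requests FIFO: request i waits for all
    i earlier requests to finish, then takes proc_ms itself."""
    return [(i + 1) * proc_ms for i in range(n_queries)]

lat = serialized_latencies(10, 1.0)
# first request ≈ 1 ms, last request ≈ 10 ms, as in the example above
```

The average here is still only ~5.5 ms, which is how a single-threaded server can show a good mean latency while the tail looks poor.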
>> We have VM enabled, but the info client reports it's not being used
If it is not used, why not try starting Redis with no VM at all?
At least you will validate whether the VM has an impact or not.
>> What do you think of the time out-retry estrategy?
I'm rather surprised you have good results with a 5 ms timeout.
It is really close to the typical kernel scheduler time slice.
I'm probably too old and too conservative, but I never use
communication timeouts below 2 seconds on my Unix/Linux systems.
If you suspect scheduling issues, you may want to try to isolate
Redis on its own core (taskset, numactl), or run Redis with a
real-time priority under SCHED_RR or SCHED_FIFO (chrt). Be sure
to deactivate bgsave before trying this.
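Concretely, the pinning and priority suggestions might look like this (the core id and priority are arbitrary choices for illustration; chrt requires root, and bgsave must be disabled first as noted above):

```shell
# Pin the running Redis process to one core so the scheduler
# does not migrate it (core 3 is an arbitrary choice):
taskset -cp 3 "$(pidof redis-server)"

# Or give it a real-time round-robin priority (SCHED_RR, priority 50):
chrt -r -p 50 "$(pidof redis-server)"
```

A real-time priority keeps the kernel from preempting Redis in favor of ordinary tasks, but a runaway real-time process can starve the machine, so this is something to test carefully rather than deploy blindly.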
Regards,
Didier.
Original comment by didier...@gmail.com on 4 Apr 2011 at 11:09
Original issue reported on code.google.com by npm...@gmail.com on 4 Apr 2011 at 2:38