Hello, yes, this is a known problem, due to all the seeks required to transfer all the keys from disk to memory before writing them to the .rdb file. I'll try to investigate this issue further, but the solution is probably going to be the use of memory-mapped files for VM.
More news later.
Thanks for the report,
Salvatore
Original comment by anti...@gmail.com
on 7 May 2010 at 9:04
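The access-pattern difference Salvatore describes (one seek+read per on-disk value versus letting the kernel page a mapping in) can be sketched with a small, self-contained Python demo. The file layout, page size, and helper names here are hypothetical illustrations, not Redis's actual swap-file format:

```python
import mmap, os, tempfile

PAGE = 32      # analogue of vm-page-size (bytes); illustrative only
PAGES = 1024   # number of pages in the demo file

# Build a throwaway file standing in for the VM swap file.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * (PAGE * PAGES))
os.close(fd)

def read_with_seeks(offsets):
    # One seek + read pair per page: the syscall-heavy pattern that
    # dominates saving when many values live on disk.
    out = []
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off * PAGE)
            out.append(f.read(PAGE))
    return out

def read_with_mmap(offsets):
    # With a memory mapping the kernel pages data in on demand;
    # there are no per-page seek/read syscalls.
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            return [m[off * PAGE:(off + 1) * PAGE] for off in offsets]

offsets = list(range(PAGES - 1, -1, -7))   # scattered access order
seek_pages = read_with_seeks(offsets)
mmap_pages = read_with_mmap(offsets)
os.unlink(path)
assert seek_pages == mmap_pages
```

Both strategies return identical page contents; the difference is purely in how many syscalls and disk seeks it takes to get there.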
Can't reproduce this with ext4 and flags to speed things up; I wonder if your filesystem is mounted with atime. Can you please post the output of mount here? Btw, I'm going to try with other configurations of ext4 to verify whether my hypothesis is correct. Thanks.
Original comment by anti...@gmail.com
on 7 May 2010 at 1:48
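The atime suspicion matters because with plain atime every read of the swap file also triggers a metadata write. A hypothetical little helper (the function name is mine, not from this thread) for classifying the options column of mount output:

```python
def atime_mode(options: str) -> str:
    """Classify the atime behaviour from a mount options string,
    e.g. the '(rw,noatime)' part of a line of `mount` output."""
    opts = options.split(",")
    # noatime wins if present; relatime/strictatime otherwise;
    # no flag at all means full atime updates (a write per read).
    for flag in ("noatime", "relatime", "strictatime"):
        if flag in opts:
            return flag
    return "atime (default)"

assert atime_mode("rw,noatime") == "noatime"       # Nick's mount, below
assert atime_mode("rw") == "atime (default)"       # the suspected bad case
```

As it turns out below, Nick's filesystem is already mounted noatime, so this particular hypothesis doesn't explain his slowdown.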
New info: with ext4 and any combination of flags I get 9 MB/second saving throughput, with 4 million keys and a 40 GB swap file. Now trying with 22 million keys to perfectly match your use case.
Original comment by anti...@gmail.com
on 7 May 2010 at 2:17
I have got ext3, not ext4.
/dev/md4 on /home/data-redis type ext3 (rw,noatime)
I can try using ext4.
--
Nick
Original comment by nick.pot...@gmail.com
on 7 May 2010 at 2:21
Hello Nick, maybe my benchmark had a very different outcome because I have a lot of RAM caching the swap file. I'm now checking with enough keys that it is not possible for the system to cache a meaningful amount of the swap file, as I remember experiencing slow saving times with very large key counts as well...
Original comment by anti...@gmail.com
on 7 May 2010 at 2:26
Still investigating all these issues, sorry for the delay. In the last few days I already committed patches to speed up loading times with VM enabled. Now I'm focusing on saving times...
Thanks for your patience.
Original comment by anti...@gmail.com
on 10 May 2010 at 9:48
For information, I used to have the same problem, but it was caused by an I/O limitation on my VM ("cloud computing" isn't good for everything).
Original comment by stephane.angel
on 11 May 2010 at 9:18
Hello, thanks for your comments/help. Quick update: the vm-speedup branch in git speeds up loading times a lot. We are working more on this... RC1 will definitely be faster than the other 1.3.x releases that were flying around in recent weeks.
Original comment by anti...@gmail.com
on 11 May 2010 at 9:21
We are experiencing this issue with version 2.0.0 too, but it seems to happen only if lots of keys are already swapped. Our machine has a total of 8 GB of RAM; the OS is Debian 5.0 x86_64.
redis> info
redis_version:2.0.0
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:31213
uptime_in_seconds:229338
uptime_in_days:2
connected_clients:2
connected_slaves:0
blocked_clients:0
used_memory:2145992344
used_memory_human:2.00G
changes_since_last_save:12731658
bgsave_in_progress:0
last_save_time:1284742222
bgrewriteaof_in_progress:0
total_connections_received:149254
total_commands_processed:14053410
expired_keys:16823
hash_max_zipmap_entries:64
hash_max_zipmap_value:512
pubsub_channels:0
pubsub_patterns:0
vm_enabled:1
role:master
vm_conf_max_memory:2147483648
vm_conf_page_size:32
vm_conf_pages:134217728
vm_stats_used_pages:1925425
vm_stats_swapped_objects:485763
vm_stats_swappin_count:16932
vm_stats_swappout_count:1801410
vm_stats_io_newjobs_len:0
vm_stats_io_processing_len:0
vm_stats_io_processed_len:0
vm_stats_io_active_threads:0
vm_stats_blocked_clients:0
db0:keys=5253259,expires=5253189
Best Regards
Hajo Skwirblies
Original comment by hajo.skw...@googlemail.com
on 20 Sep 2010 at 9:29
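A quick back-of-the-envelope reading of the VM fields in the INFO output above (my arithmetic, not part of the original report): the configured swap file is vm_conf_page_size × vm_conf_pages, and the pages actually in use are a small fraction of it:

```python
# Values copied from the INFO output above.
page_size = 32                 # vm_conf_page_size (bytes)
pages = 134217728              # vm_conf_pages
used_pages = 1925425           # vm_stats_used_pages
swapped_objects = 485763       # vm_stats_swapped_objects

swap_file_bytes = page_size * pages        # configured swap file size
used_bytes = page_size * used_pages        # pages actually occupied

print(swap_file_bytes // 2**30, "GiB swap file")          # 4 GiB
print(round(used_bytes / 2**20, 1), "MiB in use")         # 58.8 MiB
print(round(used_bytes / swapped_objects, 1), "B/object") # ~126.8 B avg
```

So roughly 486k of the ~5.25M keys are on disk, averaging about 127 bytes each; the slowdown Hajo sees when "lots of keys are swapped already" is consistent with a seek per swapped-out value during the save.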
Hello antirez. I have the same problem as hajo.skwirblies: Redis saving is very, very slow. I deleted 10% of the key data (18 GB in total) before executing the shutdown command.
The system has 32 GB of RAM; no swap file is in use.
The CPU is always at 100% while saving data:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21443 www 25 0 22.2g 22g 704 R 100 70.8 2501:35 redis-server
Dump file writing is VERY slow (about 50-200 KB/s):
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 4.00 0.00 280.00 0 280
sda 0.00 0.00 0.00 0 0
sda 0.00 0.00 0.00 0 0
sda 0.00 0.00 0.00 0 0
sda 0.00 0.00 0.00 0 0
sda 7.00 0.00 376.00 0 376
sda 0.00 0.00 0.00 0 0
sda 0.00 0.00 0.00 0 0
sda 0.00 0.00 0.00 0 0
sda 0.00 0.00 0.00 0 0
sda 5.00 0.00 304.00 0 304
sda 1.00 0.00 16.00 0 16
sda 0.00 0.00 0.00 0 0
sda 0.00 0.00 0.00 0 0
sda 0.00 0.00 0.00 0 0
sda 8.00 0.00 312.00 0 312
sda 1.00 0.00 60.00 0 60
sda 0.00 0.00 0.00 0 0
sda 15.00 0.00 84.00 0 84
Best regards & thank you, antirez
Original comment by liangcha...@msn.com
on 7 Jan 2011 at 6:36
Original issue reported on code.google.com by
nick.pot...@gmail.com
on 7 May 2010 at 8:53