yinqiwen / ardb

A Redis-protocol-compatible NoSQL database. It supports multiple storage engines as backends, such as Google's LevelDB, Facebook's RocksDB, OpenLDAP's LMDB, PerconaFT, WiredTiger, and ForestDB.
BSD 3-Clause "New" or "Revised" License
1.83k stars 278 forks

ardb with rocksdb backend used too much memory #403

Open ns-xlz opened 6 years ago

ns-xlz commented 6 years ago

I used RocksDB as the backend for ardb. The memory used by ardb-server is more than 50G, with 20G RES. This is my RocksDB config:

write_buffer_size=512M;max_write_buffer_number=5;min_write_buffer_number_to_merge=3;compression=kSnappyCompression;\
bloom_locality=1;memtable_prefix_bloom_size_ratio=0.1;\
block_based_table_factory={block_cache=512M;filter_policy=bloomfilter:10:true};\
create_if_missing=true;max_open_files=1000;rate_limiter_bytes_per_sec=50M

It is also weird that the data on disk takes less than 1G:

du -sh workspace/data/
981M    workspace/data/

Why does it use so much memory? I also found that the memory usage grows every day. Any ideas?
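One plausible contributor (a back-of-the-envelope sketch, not a diagnosis): each RocksDB column family can hold up to write_buffer_size × max_write_buffer_number of memtables, so if ardb opens many column families, the worst-case memtable budget multiplies. The column-family count below is a made-up assumption for illustration; the other numbers come from the config in this thread.

```python
# Worst-case memtable + block cache budget for the config in this issue.
MB = 1024 ** 2
GiB = 1024 ** 3

write_buffer_size = 512 * MB     # from rocksdb.options
max_write_buffer_number = 5      # from rocksdb.options
block_cache = 512 * MB           # from block_based_table_factory
num_column_families = 16         # HYPOTHETICAL: not shown in the thread

memtable_budget = write_buffer_size * max_write_buffer_number * num_column_families
total = memtable_budget + block_cache
print(f"worst-case memtables + block cache: {total / GiB:.1f} GiB")
```

If the real column-family count is anywhere near this, memtables alone could explain tens of gigabytes of RES even with under 1G of data on disk.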

yinqiwen commented 6 years ago

Did you set redis-compatible-mode to yes? There is an issue about RocksDB's merge operator using too much memory: https://github.com/yinqiwen/ardb/issues/391
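Whatever the cause, the reported daily growth can be confirmed by sampling the server's resident set size over time. A generic sketch (the process name is taken from this thread; `pgrep`/`ps` availability and `ps -o rss=` reporting KiB are Linux assumptions):

```shell
# Print ardb-server's current resident memory in GiB (Linux).
pid=$(pgrep -f ardb-server | head -n 1)
rss_kib=$(ps -o rss= -p "$pid")
awk -v kib="$rss_kib" 'BEGIN { printf "RES: %.1f GiB\n", kib / 1024 / 1024 }'
```

Run this periodically (e.g. from cron) and compare against the on-disk size from `du -sh` to see whether growth is in memory only.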

ns-xlz commented 6 years ago

No, I am sure this option is set to 'no'. Is any other option wrong? @yinqiwen

engine rocksdb
daemonize yes
pidfile ${ARDB_HOME}/ardb.pid
thread-pool-size              16
server[0].listen              0.0.0.0:16379

qps-limit-per-host                  0
qps-limit-per-connection            0

rocksdb.compaction           OptimizeLevelStyleCompaction
rocksdb.scan-total-order              false
rocksdb.disableWAL            true
rocksdb.options               write_buffer_size=512M;max_write_buffer_number=5;min_write_buffer_number_to_merge=3;compression=kSnappyCompression;\
                              bloom_locality=1;memtable_prefix_bloom_size_ratio=0.1;\
                              block_based_table_factory={block_cache=512M;filter_policy=bloomfilter:10:true};\
                              create_if_missing=true;max_open_files=1000;rate_limiter_bytes_per_sec=50M

timeout 0
tcp-keepalive 0

data-dir ${ARDB_HOME}/data
slave-workers   4
max-slave-worker-queue  1024

repl-dir                          ${ARDB_HOME}/repl

slave-serve-stale-data yes
slave-priority 100
slave-read-only yes

backup-dir                        ${ARDB_HOME}/backup
backup-file-format                ardb

repl-disable-tcp-nodelay no
repl-backlog-size           1G
repl-backlog-cache-size     100M
snapshot-max-lag-offset     500M
maxsnapshots                10

slave-serve-stale-data yes
slave-cleardb-before-fullresync    yes
repl-backlog-sync-period         5

slave-ignore-expire   no
slave-ignore-del      no

zk-recv-timeout  10000
zk-clientid-file ${ARDB_HOME}/ardb.zkclientid

slave-client-output-buffer-limit 256mb
pubsub-client-output-buffer-limit 32mb

slowlog-log-slower-than 10000
slowlog-max-len 128
lua-time-limit 5000
scan-redis-compatible         yes
scan-cursor-expire-after      60
redis-compatible-mode     no
redis-compatible-version  2.8.0
statistics-log-period     600
range-delete-min-size  100

yinqiwen commented 6 years ago

Can you set redis-compatible-mode to yes? The merge operator is enabled when redis-compatible-mode is no.
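Per the maintainer's suggestion, applying the workaround is a one-line change to the config pasted earlier in this thread (which, if his explanation is right, disables the merge operator implicated in issue #391):

```
redis-compatible-mode     yes
```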

ns-xlz commented 6 years ago

OK, I am trying your suggestion.