Closed 3 weeks ago
I want to use RocksDB (C API) on a crowded server, so I need to reserve enough memory for RocksDB. For the block cache I can set a limit. I also set `max_open_files = -1` for better performance, and I prefer not to cache meta blocks in the block cache. So how can I get an approximate memory usage figure for this part? Via the `mem_table_readers_total` field of `rocksdb_memory_usage_t`?
Yes, that looks right. Table reader memory should include the uncached metadata blocks.
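As a sketch of how that counter can be read through the C API (assuming an already-open `rocksdb_t *db` and a `rocksdb_cache_t *cache`; function names are from RocksDB's `c.h` header, error handling kept minimal):

```c
#include <inttypes.h>
#include <stdio.h>
#include <rocksdb/c.h>

/* Sketch: query RocksDB's approximate memory usage for one DB and one cache. */
static void print_memory_usage(rocksdb_t *db, rocksdb_cache_t *cache) {
    char *err = NULL;

    rocksdb_memory_consumers_t *consumers = rocksdb_memory_consumers_create();
    rocksdb_memory_consumers_add_db(consumers, db);
    rocksdb_memory_consumers_add_cache(consumers, cache);

    rocksdb_memory_usage_t *usage =
        rocksdb_approximate_memory_usage_create(consumers, &err);
    if (err == NULL) {
        /* Table reader memory: this is where uncached index/filter blocks
         * land when cache_index_and_filter_blocks is false. */
        printf("table readers: %" PRIu64 " bytes\n",
               rocksdb_approximate_memory_usage_get_mem_table_readers_total(usage));
        printf("memtables:     %" PRIu64 " bytes\n",
               rocksdb_approximate_memory_usage_get_mem_table_total(usage));
        printf("block cache:   %" PRIu64 " bytes\n",
               rocksdb_approximate_memory_usage_get_cache_total(usage));
        rocksdb_approximate_memory_usage_destroy(usage);
    }
    rocksdb_memory_consumers_destroy(consumers);
}
```

Note that this snippet requires linking against librocksdb (e.g. `-lrocksdb`), so it is illustrative rather than standalone-runnable.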
By the way, does `max_open_files` govern the meta blocks (index blocks / filter blocks)? What's the difference between the two? To balance memory usage and performance, I think I should at least keep the meta blocks in memory, and then use whatever memory is left to grow the block cache.
`max_open_files` is a limit on the number of table reader objects. A table reader object manages multiple resources, such as a file descriptor and, if `cache_index_and_filter_blocks` is false, an index and filter block.
The index block stores an index into the data blocks (implementation details: https://github.com/facebook/rocksdb/wiki/Index-Block-Format). The filter block stores a filter, which is explained here: https://github.com/facebook/rocksdb/wiki/RocksDB-Bloom-Filter
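To make the trade-off concrete, here is a hedged sketch of the relevant options in the C API. The function names come from RocksDB's `c.h`; the cache size and the helper name `make_options` are illustrative, not recommendations:

```c
#include <rocksdb/c.h>

/* Sketch: two ways to hold index/filter blocks, via RocksDB's C API. */
rocksdb_options_t *make_options(void) {
    rocksdb_options_t *opts = rocksdb_options_create();
    rocksdb_block_based_table_options_t *table_opts =
        rocksdb_block_based_options_create();

    /* Option A: keep index/filter blocks inside table readers.
     * Their memory is then bounded only by max_open_files
     * (unbounded when -1) and is reported under
     * mem_table_readers_total rather than the block cache. */
    rocksdb_block_based_options_set_cache_index_and_filter_blocks(table_opts, 0);
    rocksdb_options_set_max_open_files(opts, -1);

    /* Option B (alternative): charge index/filter blocks to the block
     * cache instead, so a single cache limit covers data + meta blocks:
     *
     *   rocksdb_cache_t *cache = rocksdb_cache_create_lru(512 * 1024 * 1024);
     *   rocksdb_block_based_options_set_block_cache(table_opts, cache);
     *   rocksdb_block_based_options_set_cache_index_and_filter_blocks(table_opts, 1);
     */

    rocksdb_options_set_block_based_table_factory(opts, table_opts);
    return opts;
}
```

As with the previous snippet, this needs librocksdb to compile and run, so treat it as a sketch of the option wiring rather than a drop-in configuration.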
Great, thanks for your time and patience!