facebook / CacheLib

Pluggable in-process caching engine to build and scale high performance services
https://www.cachelib.org
Apache License 2.0

Minimum Limit For Cache Allocation? #316

Closed aaditya2200 closed 1 month ago

aaditya2200 commented 3 months ago

Hi,

I have been using cachelib to build a swap space. I want the cache to be of a certain size based on an input parameter, and I have designed it as follows:

```cpp
cache::cache(backing_store bs, uint64_t n)
    : backstore(bs), max_in_memory_objects(n) {
  // Write items back to the backing store when they are evicted or removed.
  auto itemDestructor =
      [&](const facebook::cachelib::LruAllocator::DestructorData& data) {
        write_back(data.item);
      };

  Cache::Config config;
  config.setCacheSize(1 * sizeof(object) * n)  // room for n objects
      .setCacheName("Lrunodes")
      .setAccessConfig({25, 10})
      .setItemDestructor(itemDestructor)
      .validate();

  gcache = std::make_unique<Cache>(config);
  defaultpool = gcache->addPool(
      "defaultpool", gcache->getCacheMemoryStats().ramCacheSize);
}
```

Here n is the cache size, given as an input parameter. What I have noticed is that for small values of n, the allocation fails with the following error:

```
E20240526 08:58:46.625504 68212 ExceptionTracer.cpp:222] exception stack complete
terminate called after throwing an instance of 'std::invalid_argument'
  what():  not enough memory for slabs
```

The minimum n for which I can get an allocation seems to be 150K. I was wondering whether there is a lower bound on the amount of memory we can allocate?

therealgymmy commented 2 months ago

CacheLib uses a slab allocator underneath. The cache size needs to be a multiple of 4MB (though you don't have to specify an exact multiple; we internally round down to a 4MB-aligned size). That means the cache needs to be at least 2 slabs, i.e. 8MB, since we use 1 slab for internal metadata about the other slabs.

Typically I would suggest just making the cache 10 or 100 slabs for testing.
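
As a standalone illustration (not the project's own code), here is a minimal sketch of enforcing that floor when the requested size comes from a parameter. The `makeCache` helper, the 10-slab floor, and the hard-coded 4MB slab size are assumptions made for the example; the `Config` calls mirror the ones in the original post.

```cpp
#include <algorithm>
#include <cstdint>
#include <memory>

#include "cachelib/allocator/CacheAllocator.h"

using Cache = facebook::cachelib::LruAllocator;

// Hypothetical helper: clamp a requested byte size up to a safe minimum
// before building the cache. Assumes 4MB slabs and a 10-slab floor, per
// the suggestion above (the hard minimum is 2 slabs: 1 for metadata plus
// at least 1 usable slab).
std::unique_ptr<Cache> makeCache(uint64_t requestedBytes) {
  constexpr uint64_t kSlabSize = 4ULL * 1024 * 1024;  // 4MB slab size
  constexpr uint64_t kMinSlabs = 10;                  // comfortable floor for testing
  const uint64_t sizeBytes = std::max(requestedBytes, kMinSlabs * kSlabSize);

  Cache::Config config;
  config.setCacheSize(sizeBytes)
      .setCacheName("Lrunodes")
      .setAccessConfig({25, 10})
      .validate();
  return std::make_unique<Cache>(config);
}
```

With a floor like this, even a very small n in the original constructor would still produce a cache large enough for CacheLib's slab bookkeeping, avoiding the "not enough memory for slabs" exception.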