Closed — elvinio closed this issue 7 years ago
Hi Elvin,
By default, the table is configured to favor maximum performance over memory usage, which makes the overhead for small tables high, but this can be tuned. The empty-table overhead is high because the table preallocates 2^16 spin locks for locking. However, cuckoohash_config.hh has a parameter called LIBCUCKOO_LOCK_ARRAY_GRANULARITY which, when set to a larger number (up to 16), causes the locks to be allocated dynamically in smaller chunks. This has the advantage of lower memory overhead for small tables, at the price of slightly slower lock access. Unfortunately this is a global parameter right now and cannot be set separately for the inner and outer tables, but performance should not degrade significantly.
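For reference, a rough sketch of the kind of change being described; the exact name and form of the declaration in cuckoohash_config.hh may differ between versions, so treat this as illustrative rather than a verbatim copy of the header:

```cpp
// cuckoohash_config.hh (illustrative sketch, not a verbatim copy of the header)
//
// A larger granularity (up to 16) makes the table allocate its spin locks
// lazily in smaller chunks instead of preallocating all 2^16 of them up
// front, trading slightly slower lock access for a much smaller
// empty-table footprint.
const size_t LIBCUCKOO_LOCK_ARRAY_GRANULARITY = 4;  // e.g. 4, as used later in this thread
```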
Please give it a try and let me know if that helps at all.
Hi Manu
The memory allocation has dropped a lot after I set LIBCUCKOO_LOCK_ARRAY_GRANULARITY to a value of 4. It is now about 250 KB per table. This definitely helped.
Thanks for your help.
Great to hear!
Hi guys,
I'm looking to use nested tables with this library and noticed the memory consumption is high. I want to check if this is expected.
This is the code I ran based on the nested table example. In this case, I am inserting 1000 inner tables.
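(The snippet itself is not reproduced here; the following is a minimal sketch of the kind of nested-table allocation described, using placeholder key/value types InnerTable/OuterTable rather than the types from the original code, and assuming the older API in which cuckoohash_map lives at global scope.)

```cpp
#include <memory>
#include <string>
#include <libcuckoo/cuckoohash_map.hh>

// Placeholder types; the key/value types in the original snippet may differ.
using InnerTable = cuckoohash_map<int, std::string>;
using OuterTable = cuckoohash_map<int, std::shared_ptr<InnerTable>>;

int main() {
    OuterTable outer;
    // Insert 1000 freshly constructed, empty inner tables, mirroring the
    // scenario described in this issue.
    for (int i = 0; i < 1000; ++i) {
        outer.insert(i, std::make_shared<InnerTable>());
    }
    // The process RSS (e.g. as reported by top) can now be inspected to see
    // the per-table memory overhead discussed in this thread.
}
```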
The output from top:
The memory consumption was 2.6 GB after all the tables are allocated. That works out to about 2.6 MB per table. The memory used is similar even if I allocate 1000 outer tables.
I cannot figure out why each empty hashmap requires 2.6 MB of memory.
Any help on my code or explanation to help me understand is greatly appreciated.
Thanks in advance.
Elvin