Hi @SLrepo. This exception means the hash table was forced to resize before being filled to an acceptable capacity (https://github.com/efficient/libcuckoo/blob/735a74b01610fb8d3614019429af370856e4a88c/libcuckoo/cuckoohash_config.hh#L22). It exists to prevent the table from resizing in an infinite loop when it cannot successfully rehash the contents of the smaller old table into the larger new one.
Generally, if the table needs to resize before reaching a load factor of 0.05, it suggests the hash function is degenerate in some way (e.g. it only uses the upper bits of your keys, and those bits are always 0). If you have a reproducible code snippet, it would be helpful to post it.
Otherwise, I would look at the actual values produced by your table's hash function and check whether they are well distributed.
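For example, one quick sanity check (a minimal sketch, not from the original thread) is to run the same hasher over a sample of real keys and eyeball how evenly the results spread across a handful of buckets. The `sample_keys` values and bucket count below are placeholders for your own data:

```cpp
#include <boost/functional/hash.hpp>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    // Placeholder data: substitute a sample of the real keys from your workload.
    std::vector<std::vector<int>> sample_keys = {
        {0, 1}, {0, 2}, {1, 2}, {0, 1, 2}, {3, 4, 5}};

    const std::size_t num_buckets = 16;  // small bucket count, just for eyeballing spread
    std::vector<std::size_t> counts(num_buckets, 0);

    boost::hash<std::vector<int>> hasher;
    for (const auto &key : sample_keys) {
        std::size_t h = hasher(key);
        std::cout << "hash = " << h << "\n";
        ++counts[h % num_buckets];
    }

    // A heavily skewed histogram here would point at a degenerate hash.
    for (std::size_t i = 0; i < num_buckets; ++i)
        std::cout << "bucket " << i << ": " << counts[i] << "\n";
}
```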
Thanks for the explanation. I was using boost::hash on a vector of int. I can try other ways to hash the vector.
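For what it's worth, one alternative way to hash the vector (purely illustrative, not something settled in this thread, and `VectorIntFnvHash` is a made-up name) is a hand-rolled hasher that mixes every element, e.g. an FNV-1a-style mix over the ints (the textbook FNV-1a works byte by byte, but element-wise mixing is usually enough for a quick test):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical alternative hasher for std::vector<int>: FNV-1a-style mix over the elements.
struct VectorIntFnvHash {
    std::size_t operator()(const std::vector<int> &v) const {
        std::uint64_t h = 14695981039346656037ULL;  // FNV-1a 64-bit offset basis
        for (int x : v) {
            h ^= static_cast<std::uint64_t>(static_cast<std::uint32_t>(x));
            h *= 1099511628211ULL;                  // FNV-1a 64-bit prime
        }
        return static_cast<std::size_t>(h);
    }
};
```

The important property is that every element contributes to the result and the output bits are well mixed; any hasher with those properties should behave similarly here.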
I am running a multithreaded program. The map is shared among all threads. Within each thread, I have something like this:
```cpp
auto updatefn = [&, this](std::vector<int> &value) { value.push_back(1); };
for (const auto &key : keys)  // keys come from another source
    simplex_vertices_map_.upsert(key, updatefn, std::vector<int>(1, 0));
```
It always terminates with the following exception:
```
terminate called after throwing an instance of 'libcuckoo::load_factor_too_low'
  what():  Automatic expansion triggered when load factor was below minimum threshold
Aborted (core dumped)
```
Has anyone experienced it before? Any ideas why?
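In case a runnable snippet helps anyone reproduce or rule this out, here is a hedged, self-contained sketch of the pattern described above. The map declaration, key shapes, thread count, and iteration count are all assumptions (the real `simplex_vertices_map_` is not shown in the thread), and depending on the libcuckoo version the class may or may not live in the `libcuckoo` namespace:

```cpp
#include <libcuckoo/cuckoohash_map.hh>
#include <boost/functional/hash.hpp>
#include <thread>
#include <vector>

int main() {
    // Assumed declaration; the real simplex_vertices_map_ may use different types.
    libcuckoo::cuckoohash_map<std::vector<int>, std::vector<int>,
                              boost::hash<std::vector<int>>> map;

    auto updatefn = [](std::vector<int> &value) { value.push_back(1); };

    auto worker = [&](int tid) {
        // Each thread upserts keys built from its id and a counter, mimicking
        // "iterate a list of keys from another source" in the snippet above.
        for (int i = 0; i < 100000; ++i) {
            std::vector<int> key = {tid, i};
            map.upsert(key, updatefn, std::vector<int>(1, 0));
        }
    };

    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t) threads.emplace_back(worker, t);
    for (auto &th : threads) th.join();
    return 0;
}
```

If a minimal program along these lines also throws load_factor_too_low, that would point at the key/hash combination rather than the surrounding application code.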