Closed nick-thompson closed 2 years ago
It definitely should be std::pair<int64_t, float*>. One issue with std::unordered_map is that it does both allocations of individual nodes as well as arrays (for the bucket list). This makes it unsuitable for a pool allocator in general. I believe the issue is that it tries to allocate the bucket array, which is bigger than the individual nodes that hold the elements, and there isn't a good way of fixing that.
If you insist on using std::unordered_map as opposed to some other hash table, I would just use a bigger node size.
Got it, thanks @foonathan
If you insist on using std::unordered_map as opposed to some other hash table, I would just use a bigger node size.
I'm open to other hash tables surely; would you recommend anything in particular for use with the pool allocator?
My go-to hash table recommendation is Abseil's flat_hash_map. It only allocates memory in contiguous chunks, like std::vector, so you might not need a custom allocator at all, especially if the tables are getting big. Otherwise, a stack allocator works.
If you want address stability of the elements, you could use Abseil's node_hash_map, which does two kinds of allocations: big contiguous ones for the table, and then one small allocation per element. Only the latter can use a pool, but there is no way to provide different allocators there. So if you need that, I'd recommend using flat_hash_map with a std::unique_ptr using the pool.
Thank you @foonathan, that's super helpful.
Otherwise, a stack allocator works.
I'll add here briefly for any future readers that the memory_stack allocator provided here works great with unordered_map if that happens to fit your memory needs. I'll be evaluating that approach versus Abseil maps + memory pools moving forward.
Thanks again
Just keep in mind that the stack allocator will never free memory. So if the hash table does an internal rehashing, the previous memory stays allocated.
Just keep in mind that the stack allocator will never free memory. So if the hash table does an internal rehashing, the previous memory stays allocated.
Never? Even on map.clear()?
Yes. You need to manually use the marker on the memory_stack: https://memory.foonathan.net/classfoonathan_1_1memory_1_1memory__stack.html#a9f30af46c3d77066c680388109be2321
Ok great, thanks.
Separately, and you certainly don't need to answer this as it's unrelated to this project, but
So if the hash table does an internal rehashing,
Is there any way to prevent std::unordered_map from performing its internal rehash?
I believe if you call reserve() it shouldn't rehash unless you exceed the reserved capacity, but I don't know for certain.
Awesome, thanks so much
Hey @foonathan, one more question here, sorry to keep bothering you.
I'm experimenting now with memory::unordered_map<size_t, float*, memory::memory_stack<>>, which seems to work great on macOS. Strangely, with the exact same code, I'm hitting std::length_error from inside the standard library implementation on Windows during memory::unordered_map<size_t, float*, memory::memory_stack<>>::emplace. My debugger stops on a line suggesting that this emplace causes a rehash, and during that rehash we hit a length_error, either from attempting to bit-shift a size_t too far or from static_cast<size_t>(-1) as in the doc example linked. Have you seen anything like this? I'm rather confused, and might be looking in the wrong place, but this same bit of code was working fine before this experiment.
Thanks
Sorry, missed your reply. I have no idea about that error.
Hey, first off thanks for the library, it's been extremely helpful so far.
I'm a new user and looking to use memory::unordered_map<int64_t, float*, memory::memory_pool<>>, but I think I'm having trouble initializing the memory pool to use here, because I consistently get an error like the following upon first insertion:
The relevant code I have so far is:
And then the first insertion throws the above error. I've tried templating unordered_map_node_size with std::pair<int64_t, float*> as well, with no luck. What am I doing wrong here?
Thanks