Closed dotnwat closed 1 month ago
Hi, I created the segmented_map as a way around allocation spikes: when a std::vector is full it first has to allocate a 2x-sized array, move the elements over, and then delete the old memory, so it transiently needs 3x the memory. For the bucket index I don't think the segmented vector is of any use, because I need to allocate the memory anyway; there is no memory saved. Or is the only reason you want a different allocation behavior that you need smaller chunks? Total memory usage would increase a bit by switching the bucket index from a plain array to segments.
> Or is the only reason you want a different allocation behavior because you need smaller chunks
Correct. We operate in an environment where memory fragmentation exists. After the system runs for a while it can be difficult to allocate large (e.g. > 128 KB) chunks of contiguous memory. The segmented map offers us a solution, but it is limited because the bucket index still requires a single large contiguous allocation.
I have created a PR implementing that here: https://github.com/martinus/unordered_dense/pull/112
Let us know what you think.
Is your feature request related to a problem? Please describe.
Sort of. We are exploring the segmented map as a means to work around some constraints we have related to fragmented memory and the inability to allocate large contiguous memory regions.
However, in our experiments we found that it was fairly easy to get a >1 MB allocation for the bucket index, which exceeds the maximum allocation size we'd like to make.
Describe the solution you'd like
Describe alternatives you've considered
Using a different approach to indexing entirely for our problem.