Such an approach won't release memory if the map grows to a few million entries and then all but 100 entries are deleted. That's why the current code occasionally shrinks the map.
We have met this case: on the server, a stateful client stores all topics coming from its connection in a global concurrent topic-client hashmap. When we shut down a stateful client, we clear its resources, so a lot of topic keys are deleted from the global map at once. Such a sudden deletion of a large amount of data may cause performance jitter.
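For context, a rough sketch of that pattern, assuming a global topic-to-client map; the `Client` type, `topicClients` variable, and `shutdownClient` helper are placeholders invented for the sketch, not names from a real codebase:

```go
package main

import (
	"fmt"

	"github.com/puzpuzpuz/xsync/v3"
)

// Client is a placeholder for the stateful client described above.
type Client struct {
	ID string
}

// Global concurrent topic -> client map (placeholder name).
var topicClients = xsync.NewMapOf[string, *Client]()

// shutdownClient clears all topics owned by a client. When a client
// holds many topics, this issues a large burst of Delete calls on the
// shared map, which is where the jitter is observed.
func shutdownClient(c *Client) {
	topicClients.Range(func(topic string, owner *Client) bool {
		if owner == c {
			topicClients.Delete(topic)
		}
		return true // keep iterating
	})
}

func main() {
	c := &Client{ID: "client-1"}
	for i := 0; i < 100_000; i++ {
		topicClients.Store(fmt.Sprintf("topic-%d", i), c)
	}
	shutdownClient(c)
	fmt.Println("remaining topics:", topicClients.Size())
}
```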
I see. For such use cases, the map could be marked grow-only, so it never shrinks. This would be enabled through a new constructor function, say, NewGrowOnlyMapOf. WDYT?
Good idea. It also seems that we should reuse the memory space when a key-value pair is updated.
Good to know. I'll add these constructors - hopefully this weekend.
I've published v3.2.0, so you can now configure a map to be grow-only:
m := xsync.NewMapOf[int, int](xsync.WithGrowOnly())
Closing this issue for now as the feature is shipped. Let me know if you face any problems.
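A slightly fuller sketch of what the option changes, assuming the delete-heavy workload described above (the key names and sizes are arbitrary):

```go
package main

import (
	"fmt"

	"github.com/puzpuzpuz/xsync/v3"
)

func main() {
	// With WithGrowOnly, the map's internal table never shrinks,
	// so a burst of deletions does not trigger resize work.
	m := xsync.NewMapOf[string, int](xsync.WithGrowOnly())

	// Grow to a large size...
	for i := 0; i < 1_000_000; i++ {
		m.Store(fmt.Sprintf("topic-%d", i), i)
	}
	// ...then delete almost everything, as in the client-shutdown case.
	for i := 100; i < 1_000_000; i++ {
		m.Delete(fmt.Sprintf("topic-%d", i))
	}

	// Only the entry count drops; the enlarged capacity is retained,
	// trading memory for predictable delete latency.
	fmt.Println("entries left:", m.Size())
}
```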
Currently, MapOf deletes a key using code like this:
This may cause significant latency when delete operations are frequent. How about considering this instead of the percentage of empty buckets as the condition?
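Purely as an illustration of the "percentage of empty buckets" condition mentioned above (this is not xsync's actual implementation; `toyTable`, `shrinkFraction`, and `maybeShrink` are names invented for the sketch):

```go
package main

import "fmt"

// Toy model of a hash table's shrink decision, for illustration only.
type toyTable struct {
	buckets []int // number of entries stored in each bucket
}

const (
	minBuckets     = 16
	shrinkFraction = 0.75 // assumed threshold: shrink once >75% of buckets are empty
)

func (t *toyTable) emptyBuckets() int {
	n := 0
	for _, entries := range t.buckets {
		if entries == 0 {
			n++
		}
	}
	return n
}

// maybeShrink halves the table only when most buckets are empty,
// so frequent deletes don't each pay the cost of a resize.
func (t *toyTable) maybeShrink() {
	total := len(t.buckets)
	if total <= minBuckets {
		return
	}
	if float64(t.emptyBuckets())/float64(total) > shrinkFraction {
		// A real map would rehash entries into a smaller table here;
		// halving the slice is enough to show when the trigger fires.
		t.buckets = t.buckets[:total/2]
	}
}

func main() {
	t := &toyTable{buckets: make([]int, 1024)} // all buckets empty
	t.maybeShrink()
	fmt.Println("buckets after bulk deletion:", len(t.buckets)) // 512
}
```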