blitzcode / lru-bounded-map

Haskell implementations of associative containers (maps / dictionaries) that retire elements in least recently used (LRU) order when growing past a specified limit.

Did you try psqueues? #1

Open qrilka opened 8 years ago

qrilka commented 8 years ago

It appears to be more performant than lrucache. I hope to find some time to create a PR adding a psqueues-based cache to it, but perhaps you have done that already?
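
For reference, a psqueues-based LRU cache along those lines could look roughly like this (a minimal sketch assuming the `Data.HashPSQ` API from the psqueues package; module and function names are illustrative and not taken from this repository or any actual PR):

```haskell
-- Sketch only: each entry carries an Int "tick" as its priority, so the
-- entry with the smallest tick is the least recently used one. A real
-- implementation would also have to deal with tick overflow.
module LRUPsq (Cache, empty, insert, lookup) where

import           Prelude       hiding (lookup)
import           Data.Hashable (Hashable)
import qualified Data.HashPSQ  as PSQ

data Cache k v = Cache
  { cCapacity :: !Int                    -- maximum number of entries
  , cTick     :: !Int                    -- monotonically increasing counter
  , cQueue    :: !(PSQ.HashPSQ k Int v)  -- key -> (priority, value)
  }

empty :: Int -> Cache k v
empty capacity = Cache capacity 0 PSQ.empty

-- Insert a key/value pair, evicting the least recently used entry when the
-- capacity would be exceeded.
insert :: (Hashable k, Ord k) => k -> v -> Cache k v -> Cache k v
insert k v (Cache cap tick q) = Cache cap (tick + 1) q''
  where
    q'  = PSQ.insert k tick v q
    q'' | PSQ.size q' > cap = maybe q' (\(_, _, _, rest) -> rest) (PSQ.minView q')
        | otherwise         = q'

-- Look up a key and, on a hit, bump it to most recently used by re-inserting
-- it with a fresh tick.
lookup :: (Hashable k, Ord k) => k -> Cache k v -> (Maybe v, Cache k v)
lookup k c@(Cache cap tick q) =
  case PSQ.lookup k q of
    Nothing     -> (Nothing, c)
    Just (_, v) -> (Just v, Cache cap (tick + 1) (PSQ.insert k tick v q))
```

HashPSQ combines a hash map with a priority search queue, so both the lookup-and-bump and the evict-minimum steps are roughly logarithmic in the number of entries.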

blitzcode commented 8 years ago

lrucache uses pretty much the most naive implementation of this container short of doing something silly, so it's to be expected that psqueues is faster. I did not try psqueues myself. I vaguely remember (I might be wrong here) some interaction with the author of psqueues a while ago, and I don't think it's actually faster than the fastest Trie implemented here. I also stopped looking into better solutions: for my use case the best implementation here made the cache drop out of the profile, so no further improvement was needed.

qrilka commented 8 years ago

Thanks for the answer. For our task the LRU cache is not critical, but I'm still curious what the difference will be :) I'll try to find some time to create that PR.

blitzcode commented 8 years ago

Sure, it's a really interesting data structure problem!

When I ported to a current GHC / 64 bit I noticed some performance regressions, but didn't really investigate; probably just the 64 bit change. It should be possible to get some of that performance back: I originally designed for 32 bit Ints and added some workarounds for that.

It might also be possible to make the HAMT approach work; I never finished that implementation after some initially disappointing results.

For best performance, an actual mutable data structure implementing directly what the map-based ones emulate in a functional manner would likely be unbeatable.
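
As a rough illustration of that last point, here is a sketch of such a mutable structure in IO: a hash table for O(1) key access plus an intrusive doubly linked list tracking recency (assuming the hashtables package; all names are made up for illustration, none of this is code from the repository):

```haskell
-- Sketch only: hash table for key lookup, doubly linked list for recency.
-- Head of the list = most recently used, tail = least recently used.
module MutLRU where

import           Control.Monad     (when)
import           Data.Hashable     (Hashable)
import           Data.IORef
import qualified Data.HashTable.IO as H

data Node k v = Node
  { nKey  :: !k
  , nVal  :: !(IORef v)
  , nPrev :: !(IORef (Maybe (Node k v)))  -- towards the MRU end
  , nNext :: !(IORef (Maybe (Node k v)))  -- towards the LRU end
  }

data MutLRU k v = MutLRU
  { mTable :: !(H.BasicHashTable k (Node k v))
  , mHead  :: !(IORef (Maybe (Node k v)))  -- most recently used
  , mTail  :: !(IORef (Maybe (Node k v)))  -- least recently used
  , mSize  :: !(IORef Int)
  , mLimit :: !Int
  }

newLRU :: Int -> IO (MutLRU k v)
newLRU lim = MutLRU <$> H.new <*> newIORef Nothing <*> newIORef Nothing
                    <*> newIORef 0 <*> pure lim

-- Detach a node from the recency list.
unlink :: MutLRU k v -> Node k v -> IO ()
unlink c n = do
  p <- readIORef (nPrev n)
  x <- readIORef (nNext n)
  case p of Nothing -> writeIORef (mHead c) x
            Just p' -> writeIORef (nNext p') x
  case x of Nothing -> writeIORef (mTail c) p
            Just x' -> writeIORef (nPrev x') p

-- Link a node in at the front (most recently used position).
pushFront :: MutLRU k v -> Node k v -> IO ()
pushFront c n = do
  h <- readIORef (mHead c)
  writeIORef (nPrev n) Nothing
  writeIORef (nNext n) h
  case h of Nothing -> writeIORef (mTail c) (Just n)
            Just h' -> writeIORef (nPrev h') (Just n)
  writeIORef (mHead c) (Just n)

-- Lookup bumps the entry to most recently used.
lookupLRU :: (Eq k, Hashable k) => MutLRU k v -> k -> IO (Maybe v)
lookupLRU c k = do
  mn <- H.lookup (mTable c) k
  case mn of
    Nothing -> return Nothing
    Just n  -> do unlink c n
                  pushFront c n
                  Just <$> readIORef (nVal n)

-- Insert a key/value pair, evicting the least recently used entry if needed.
insertLRU :: (Eq k, Hashable k) => MutLRU k v -> k -> v -> IO ()
insertLRU c k v = do
  mn <- H.lookup (mTable c) k
  case mn of
    Just n  -> do writeIORef (nVal n) v
                  unlink c n
                  pushFront c n
    Nothing -> do
      n <- Node k <$> newIORef v <*> newIORef Nothing <*> newIORef Nothing
      H.insert (mTable c) k n
      pushFront c n
      modifyIORef' (mSize c) (+ 1)
      s <- readIORef (mSize c)
      when (s > mLimit c) $ do
        mt <- readIORef (mTail c)
        case mt of
          Nothing -> return ()
          Just t  -> do unlink c t
                        H.delete (mTable c) (nKey t)
                        modifyIORef' (mSize c) (subtract 1)
```

Every operation is a small constant amount of pointer surgery on top of a hash table access; the trade-off is that the structure lives in IO (or ST) and is no longer persistent.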