jeromefroe / lru-rs

An implementation of an LRU cache
https://docs.rs/lru/
MIT License

Generic Improvements #169

Open jonhartnett opened 1 year ago

jonhartnett commented 1 year ago

Disclaimer: This PR is kind of big. I recommend reviewing it patch-by-patch instead of all at once, which will make the individual atomic changes a lot easier to follow.

This PR is the result of a bunch of improvements that I made on my local fork and thought I'd share upstream. Incidentally, they happen to solve many of the open issues. All changes have associated documentation and were validated against the existing tests and doc tests. New tests and doc tests were added where appropriate to cover the new features.

Overview:

  1. Reduce memory usage by merging the head and tail nodes and by switching the internal storage from a HashMap to a HashSet (see the first sketch after this list).
  2. Eliminate the eager allocation of the root node(s) in new.
  3. Allow 0 as a capacity, which is a nice-to-have use case in its own right, but mainly simplifies much of the API and docs.
  4. Add an Entry API for transformations that don't fit one of the main accessor functions, making them easier and more efficient. This also simplifies the internals, which makes the safety of the unsafe blocks easier to reason about (see the second sketch below).
  5. Add a Limiter API for limiting the cache size by metrics other than len, e.g. memory usage (see the third sketch below).
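
To make item 1 concrete, here is a minimal sketch of the Borrow-based pattern that lets a HashSet be queried with a bare key while the node itself owns that key, so the key is not stored twice. The names (Node, NodeBox, lookup) are illustrative only and not necessarily what this PR uses; the real nodes would also carry the intrusive linked-list pointers.

```rust
use std::borrow::Borrow;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// The node owns the key and value; the set stores only these boxed nodes.
struct Node<K, V> {
    key: K,
    value: V,
    // prev/next pointers for the LRU list would live here
}

struct NodeBox<K, V>(Box<Node<K, V>>);

// Hash and equality are defined purely in terms of the key.
impl<K: Hash, V> Hash for NodeBox<K, V> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.0.key.hash(state);
    }
}

impl<K: PartialEq, V> PartialEq for NodeBox<K, V> {
    fn eq(&self, other: &Self) -> bool {
        self.0.key == other.0.key
    }
}

impl<K: Eq, V> Eq for NodeBox<K, V> {}

// Borrow lets the set be queried with a &K, without building a NodeBox.
impl<K, V> Borrow<K> for NodeBox<K, V> {
    fn borrow(&self) -> &K {
        &self.0.key
    }
}

fn lookup<'a, K: Hash + Eq, V>(set: &'a HashSet<NodeBox<K, V>>, key: &K) -> Option<&'a V> {
    set.get(key).map(|node| &node.0.value)
}
```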
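For item 4, a hypothetical usage sketch of what an Entry-style API could look like on LruCache. The entry/or_insert names simply mirror std::collections::hash_map::Entry and may not match the methods actually added in this PR.

```rust
use lru::LruCache;
use std::num::NonZeroUsize;

fn main() {
    let mut cache: LruCache<String, u64> = LruCache::new(NonZeroUsize::new(100).unwrap());

    // Hypothetical entry()/or_insert(): insert 0 if the word is absent and
    // then bump the counter in place, touching the table once instead of a
    // get_mut() followed by a put().
    for word in ["a", "b", "a"] {
        *cache.entry(word.to_string()).or_insert(0) += 1;
    }

    assert_eq!(cache.get("a"), Some(&2));
}
```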
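For item 5, a hypothetical sketch of what a cost-based limiter might look like. The trait and its methods (Limiter, cost, budget) and the ByteBudget example are illustrative names, not the API proposed in the PR; the idea is only that eviction is driven by a per-entry cost rather than by len.

```rust
// Illustrative trait: the cache would evict LRU entries until the sum of
// costs fits within the budget.
trait Limiter<K, V> {
    /// Cost charged for a single entry (e.g. its heap footprint in bytes).
    fn cost(&self, key: &K, value: &V) -> usize;
    /// Total budget the cache is allowed to hold.
    fn budget(&self) -> usize;
}

/// Limit a byte cache by total payload size instead of entry count.
struct ByteBudget {
    max_bytes: usize,
}

impl<K> Limiter<K, Vec<u8>> for ByteBudget {
    fn cost(&self, _key: &K, value: &Vec<u8>) -> usize {
        value.len()
    }
    fn budget(&self) -> usize {
        self.max_bytes
    }
}
```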

I realize this is a lot to review in one go, so if you have issues with any of the individual commits, let me know and I can resubmit them as individual PRs.

jeromefroe commented 1 year ago

@jonhartnett I'm extremely sorry for the delay following up here. I had meant to respond when the PR was first opened, but I forgot to do so promptly and ended up completely losing track of this 🤦. If it's possible to break these changes into multiple smaller PRs, that would greatly help me review the changes 😅