marmeladema opened this issue 4 years ago
Hi @marmeladema! It's no bother at all!
I just took a look at your crate, and I think it's a great approach! I'd be happy to add a comment to the README linking to it and explaining the differences between the two. The way I see it, the two crates occupy slightly different positions on the spectrum of performance and usability: `clru-rs` offers an API closer to `HashMap` and less unsafe code at the cost of slightly more memory usage because the keys are stored in a vector instead of a linked list, and this crate makes the opposite trade-off. Does that sound about right?
Thank you for taking the time to look at the code! Really appreciate it!
> I'd be happy to add a comment to the README linking to it and explaining the differences between the two.
Wow, that would be super nice :+1: Maybe it's a bit too soon though, I'd like some more people to look at it / use it.
> at the cost of slightly more memory usage because the keys are stored in a vector instead of a linked list
The keys are moved into an `Rc` and are thus only shallow-copied in the linked list. The overhead would be similar to boxing the keys once. I am not sure there is much of a difference in memory usage between the two crates; it would be nice to measure it, though! The only case I can think of where `clru` might be more memory hungry is a non-full cache, because it pre-allocates all nodes ahead of time, but then there is not much point in using an LRU cache that is never full.
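To make the shallow-copy point concrete, here is a minimal sketch of that layout, using only standard-library types rather than either crate's actual code: the key is allocated once, moved into an `Rc`, and both the lookup map and the recency list then hold clones of that `Rc`, so only a pointer is duplicated.

```rust
use std::collections::HashMap;
use std::rc::Rc;

// Illustrative only: a String key is allocated once, wrapped in an Rc,
// and shared between the lookup map and the recency-order list.
// Cloning the Rc copies a pointer and bumps a reference count; it does
// not copy the key's contents.
fn main() {
    let key = Rc::new(String::from("some-key"));

    let mut map: HashMap<Rc<String>, u32> = HashMap::new();
    let mut order: Vec<Rc<String>> = Vec::new(); // stand-in for the linked list

    map.insert(Rc::clone(&key), 42);
    order.push(Rc::clone(&key));

    // Three handles (local binding, map entry, list entry), one String allocation.
    assert_eq!(Rc::strong_count(&key), 3);
}
```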
> Maybe it's a bit too soon though, I'd like some more people to look at it / use it.
Sounds good. We can circle back whenever you feel it's ready!
> The only case I can think of where `clru` might be more memory hungry is a non-full cache, because it pre-allocates all nodes ahead of time, but then there is not much point in using an LRU cache that is never full.
That was all I could think of too. And I agree, it certainly seems like the difference would be negligible.
By the way, I haven't tagged a new release yet, but `clru` no longer unconditionally pre-allocates all the memory. There is still a constructor available to do so, though, for cases (like the ones I have) where it is known in advance that the cache is going to be full.
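As a rough illustration of that design choice (the type and constructor names below are hypothetical, not `clru`'s real API), the difference boils down to whether the node storage is reserved up front or grown on demand:

```rust
// Hypothetical sketch of the two construction strategies; the names
// `Cache`, `new`, and `with_preallocated` are illustrative only.
struct Cache<K, V> {
    capacity: usize,
    // Stand-in for the node storage backing the linked list.
    nodes: Vec<(K, V)>,
}

impl<K, V> Cache<K, V> {
    /// Lets the node storage grow on demand as entries are inserted.
    fn new(capacity: usize) -> Self {
        Cache { capacity, nodes: Vec::new() }
    }

    /// Reserves room for every node up front, for caches that are
    /// known to fill up to `capacity`.
    fn with_preallocated(capacity: usize) -> Self {
        Cache { capacity, nodes: Vec::with_capacity(capacity) }
    }
}

fn main() {
    // A cache that will be filled right away can reserve everything up front;
    // otherwise the lazy constructor avoids paying for unused slots.
    let _warm: Cache<String, u32> = Cache::with_preallocated(1024);
    let _cold: Cache<String, u32> = Cache::new(1024);
}
```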
Hello @jeromefroe and sorry to bother you!
I have been looking for an LRU cache with O(1) operations and found your crate, but I decided, as an exercise, to implement my own: https://github.com/marmeladema/clru-rs
I wonder if you could take a quick look and tell me what you think of it?
As outlined in the README, the main differences are:
- an API closer to the standard `HashMap` collection, which allows lookups with a `Borrow`-ed version of the key
- less `unsafe` code

Technically, it might address some issues opened in this crate, while certainly introducing a lot of new ones :-) Namely:
- #72: only the keys are put into an `Rc`, and nothing else is boxed
- #85: the API being similar to `HashMap`, it should be possible to use a `&str` to fetch a value with a `String` key (a small sketch of that lookup pattern follows below)
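To make the `Borrow`-based lookup concrete, here is a minimal sketch using the standard `HashMap` rather than either crate's API: because `String: Borrow<str>`, a map keyed by `String` can be queried with a plain `&str`.

```rust
use std::collections::HashMap;

// Sketch of the Borrow-based lookup that #85 asks for, shown with the
// standard HashMap: since String implements Borrow<str>, get() accepts
// a &str even though the keys are owned Strings.
fn main() {
    let mut map: HashMap<String, u32> = HashMap::new();
    map.insert(String::from("hello"), 1);

    // No temporary String needs to be allocated for the lookup.
    assert_eq!(map.get("hello"), Some(&1));
}
```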
On the implementation side, it relies on a regular `HashMap` and a linked list stored in a vector (a rough sketch of that general technique is at the end of this comment).

I understand it's quite an unusual way to contact people, please forgive me.
Thank you in advance!
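For reference, here is a rough sketch of the general "linked list stored in a vector" technique (illustrative only, not `clru`'s actual implementation): nodes live in a `Vec` and point at each other by index rather than by pointer, so maintaining the recency order is just a matter of updating indices, with no `unsafe` code.

```rust
// Illustrative sketch of an index-based linked list, the general
// technique referred to above; this is not clru's actual code.
struct Node<K, V> {
    key: K,
    value: V,
    prev: Option<usize>,
    next: Option<usize>,
}

struct LruOrder<K, V> {
    nodes: Vec<Node<K, V>>,
    head: Option<usize>, // most recently used
    tail: Option<usize>, // least recently used
}

impl<K, V> LruOrder<K, V> {
    fn new() -> Self {
        LruOrder { nodes: Vec::new(), head: None, tail: None }
    }

    /// Push a new node at the front (most recently used position)
    /// and return its index, which a HashMap could store for O(1) access.
    fn push_front(&mut self, key: K, value: V) -> usize {
        let idx = self.nodes.len();
        self.nodes.push(Node { key, value, prev: None, next: self.head });
        if let Some(old_head) = self.head {
            self.nodes[old_head].prev = Some(idx);
        } else {
            self.tail = Some(idx);
        }
        self.head = Some(idx);
        idx
    }
}

fn main() {
    let mut order = LruOrder::new();
    let first = order.push_front("a", 1);
    let second = order.push_front("b", 2);
    // "b" is now at the head (most recently used), "a" at the tail.
    assert_eq!(order.head, Some(second));
    assert_eq!(order.tail, Some(first));
}
```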