Yomguithereal opened 5 years ago
thanks! i'll look into tiny-lru this evening.
@Yomguithereal great find! tiny-lru was totally cheating due to a bad line of code I wrote on a really long flight.
Re-opening due to the auto-close of a merge referencing the issue.
Should we withdraw lru_cache until it is fixed? I think I'll try to submit a PR on the lib's repo if I can find some time.
¯\_(ツ)_/¯
could be legit; i haven't read the code. in #20 we've found an interesting problem with mkc such that it can't complete a more random benchmark. seems a little nondeterministic.
lru_cache is definitely leaking memory and not deleting keys when it should. Will try a PR to fix the issue soon.
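A leak like this can be caught with a small capacity check. The sketch below uses a minimal Map-based LRU (illustrative, not any benchmarked library's actual code) to show the invariant a correct cache must hold: after 2n inserts into a capacity-n cache, exactly n keys remain.

```javascript
// Illustrative Map-based LRU (not any benchmarked library's actual code).
// Map iteration order doubles as recency order: the oldest key comes first.
class SimpleLRU {
  constructor(max) {
    this.max = max;
    this.map = new Map();
  }
  set(key, value) {
    if (this.map.has(key)) {
      this.map.delete(key); // refresh recency on update
    } else if (this.map.size >= this.max) {
      // evict the least recently used entry: first key in iteration order
      this.map.delete(this.map.keys().next().value);
    }
    this.map.set(key, value);
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  get size() {
    return this.map.size;
  }
}

// The invariant a correct implementation must hold: after 2n inserts into
// a capacity-n cache, exactly n keys remain. A leaking index like the one
// reported here would show 2n instead.
const capacity = 200000;
const cache = new SimpleLRU(capacity);
for (let i = 0; i < 2 * capacity; i++) cache.set(i, i);
console.log(cache.size); // 200000
```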
Related PR opened here.
Some implementations are definitely buggy. Some implementations also offer more features; for example, some have a maxAge expiration feature that may add some overhead. Different versions of the same package also have very different results.
I suggest to:
There is also a problem with the benchmark algorithm, but I will write about it in another issue.
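To illustrate the maxAge point above: an expiring cache has to read the clock and run a comparison on every get, which a plain LRU avoids. A minimal sketch (names are hypothetical, not any specific library's API):

```javascript
// Illustrative expiring cache (hypothetical names, not a specific
// library's API): every get pays for a clock read plus a comparison,
// which a plain LRU without maxAge avoids.
class ExpiringCache {
  constructor(maxAge) {
    this.maxAge = maxAge; // entry lifetime in milliseconds
    this.map = new Map(); // key -> { value, expires }
  }
  set(key, value) {
    this.map.set(key, { value, expires: Date.now() + this.maxAge });
  }
  get(key) {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) { // the per-get overhead lives here
      this.map.delete(key); // lazily expire stale entries
      return undefined;
    }
    return entry.value;
  }
}

const cache = new ExpiringCache(60000);
cache.set('x', 42);
console.log(cache.get('x')); // 42
```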
Funny thing about the hardware & software... it's nearly impossible to get a consistent run without an isolated machine solely for this test. Comparing Intel to Intel CPUs is pretty consistent, but Windows to a *nix is really different, Intel to AMD is really different, and even back-to-back runs on the same machine can differ significantly based on external factors (see https://github.com/dominictarr/bench-lru/pull/28).
The benchmark has been updated to minimize bottlenecking while timing things, and the results are significantly cleaner now.
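One common way to reduce that kind of noise, sketched here assuming a Node environment, is to repeat each measurement and report the median rather than trusting a single run:

```javascript
// Sketch of a lower-noise timing harness (assumes Node): repeat each
// measurement and report the median, since single runs are skewed by GC
// pauses, JIT warm-up, and whatever else the machine happens to be doing.
function medianTimeMs(fn, runs = 7) {
  const samples = [];
  for (let i = 0; i < runs; i++) {
    const start = process.hrtime.bigint();
    fn();
    const end = process.hrtime.bigint();
    samples.push(Number(end - start) / 1e6); // nanoseconds -> milliseconds
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(runs / 2)];
}

// Example: time a tight loop; the median is far more stable across
// invocations than any single sample.
const ms = medianTimeMs(() => {
  let sum = 0;
  for (let i = 0; i < 1e6; i++) sum += i;
});
console.log(ms);
```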
Hello @dominictarr and @avoidwork. Running some tests today, I think I stumbled upon some issues with the tiny-lru and lru_cache implementations that may explain why they appear wildly better at evicts than the other libraries.
If you replace the worker's code with this one (which is the same as the current one but with some more information logged about the aforementioned libraries):
It will log the following:
They both seem, at the end of the benchmark, to have an actual index size which is exactly twice the desired capacity `2e5` (`200000`).

What's more, if you log the number of times they actually delete from the cache object (using the `delete` keyword), you'll find that `lru_cache` deletes way less than other libs and that `tiny-lru` only actually deletes once. This is why they seem way faster.

For `lru_cache` I think the erroneous branch is this one: because the size is decreased when shifting but not increased back, there is a capacity issue that lets the index grow in memory.
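To make the bookkeeping issue concrete, here is a hypothetical shift-based index (not lru_cache's actual source, and simplified to FIFO eviction with no recency tracking). The decrement in `shift()` must be balanced by the increment in `set()`; dropping the increment lets `size` drift below the real key count, so eviction stops firing and the index grows:

```javascript
// Hypothetical shift-based LRU index (not lru_cache's actual source),
// showing the size bookkeeping that must stay balanced.
class SketchLRU {
  constructor(max) {
    this.max = max;
    this.size = 0;
    this.store = Object.create(null);
    this.keys = []; // insertion order; keys[0] is the eviction candidate
  }
  shift() {
    const oldest = this.keys.shift();
    delete this.store[oldest];
    this.size--; // size is decreased when shifting...
  }
  set(key, value) {
    if (!(key in this.store)) {
      if (this.size >= this.max) this.shift();
      this.keys.push(key);
      this.size++; // ...so it must be increased back here; dropping this
                   // line makes `size` drift below the real key count,
                   // eviction stops firing, and the index grows unbounded
    }
    this.store[key] = value;
  }
}

const c = new SketchLRU(3);
for (let i = 0; i < 10; i++) c.set(i, i);
console.log(Object.keys(c.store).length); // 3 with balanced bookkeeping
```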
For `tiny-lru` I am unsure where the issue lies, but I can investigate further if you want.
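One way to investigate along the lines of the delete-counting above is to wrap a backing object in a Proxy that records `delete` operations, so eviction activity becomes observable without editing the library. Whether this can be injected into a given cache depends on how it stores its entries; the store below is a stand-in:

```javascript
// Sketch: count `delete` operations on a cache's backing object with a
// Proxy. The plain-object store here is a stand-in; hooking it into a
// real library depends on that library's internals.
function countingStore() {
  const stats = { deletes: 0 };
  const store = new Proxy(Object.create(null), {
    deleteProperty(target, prop) {
      stats.deletes++; // record every eviction-style delete
      return delete target[prop];
    }
  });
  return { store, stats };
}

const { store, stats } = countingStore();
store.a = 1;
store.b = 2;
delete store.a;
delete store.b;
console.log(stats.deletes); // 2
```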