gurgunday closed this 1 year ago
I can send a PR too if you're down
@gurgunday Interesting. My own benchmarks indicated the opposite, see https://github.com/kibertoad/nodejs-benchmark-tournament/blob/master/cache-get-inmemory/_results/results.md https://github.com/kibertoad/nodejs-benchmark-tournament/blob/master/cache-set-inmemory/_results/results.md
Can you compare our measurement approaches and figure out why they differ in outcomes?
I don't see toad-cache-lru-map in the benchmarks, but I did a test after adding it and the results are the following:
cache-get-inmemory
{ cpu: { brand: 'M1', speed: '2.40 GHz' } }
| Node | Option | Msecs/op | Ops/sec | V8 |
| ------ | -------------------------- | -------------- | -------- | --------------------- |
| 20.8.0 | layered-loader-fifo-object | 0.191705 msecs | 5216.338 | V8 11.3.244.8-node.16 |
| 20.8.0 | layered-loader-fifo-map | 0.205043 msecs | 4877.017 | V8 11.3.244.8-node.16 |
| 20.8.0 | toad-cache-lru-map | 0.207531 msecs | 4818.554 | V8 11.3.244.8-node.16 |
| 20.8.0 | layered-loader-lru-object | 0.212746 msecs | 4700.438 | V8 11.3.244.8-node.16 |
| 20.8.0 | toad-cache-lru | 0.216082 msecs | 4627.863 | V8 11.3.244.8-node.16 |
| 20.8.0 | layered-loader-lru-map | 0.232885 msecs | 4293.968 | V8 11.3.244.8-node.16 |
| 20.8.0 | tiny-lru | 0.237502 msecs | 4210.496 | V8 11.3.244.8-node.16 |
| 20.8.0 | dataloader | 1.335993 msecs | 748.507 | V8 11.3.244.8-node.16 |
| 20.8.0 | async-cache-dedupe | 2.366963 msecs | 422.482 | V8 11.3.244.8-node.16 |
However, I don't trust this one since I'm on battery power right now; I'll redo it when I get back home.
Seems like the good ol' Object is quite a bit faster when it comes to set performance:
cache-set-inmemory
{ cpu: { brand: 'M1', speed: '2.40 GHz' } }
| Node | Option | Msecs/op | Ops/sec | V8 |
| ------ | ------------------ | -------------- | -------- | --------------------- |
| 20.8.0 | toad-cache-lru | 0.224106 msecs | 4462.169 | V8 11.3.244.8-node.16 |
| 20.8.0 | toad-cache-lru-map | 0.293277 msecs | 3409.748 | V8 11.3.244.8-node.16 |
cache-get-inmemory
{ cpu: { brand: 'M1', speed: '2.40 GHz' } }
| Node | Option | Msecs/op | Ops/sec | V8 |
| ------ | ------------------ | -------------- | -------- | --------------------- |
| 20.8.0 | toad-cache-lru-map | 0.194286 msecs | 5147.056 | V8 11.3.244.8-node.16 |
| 20.8.0 | toad-cache-lru | 0.204919 msecs | 4879.984 | V8 11.3.244.8-node.16 |
I would've preferred to see the opposite, but Map is still not there 🤣
@gurgunday can you create a PR with new benchmarks, btw? they would be useful for the future
Yeah, I just did 😁
https://github.com/kibertoad/nodejs-benchmark-tournament/pull/9
Hey, happy user here
I'm interested in centralizing all Lru and Fifo packages to yours within the Fastify ecosystem
Just wanted to ask if you would be interested in making the Map versions of the caches the default named exports (Lru and Fifo)
In recent versions of V8, Map essentially always outperforms Object when it comes to read and write speeds
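A rough micro-benchmark sketch of that claim (purely illustrative and written by me for this comment; it is not the methodology from the linked benchmark repo, and results vary a lot with V8 version, key shapes, and hardware):

```javascript
// Compare set/get throughput for a null-prototype object vs. a Map
// over a fixed pool of string keys. Illustrative only.
const N = 1_000_000;
const keys = Array.from({ length: 1000 }, (_, i) => "k" + i);

function bench(label, fn) {
  const start = process.hrtime.bigint();
  fn();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(1)} ms`);
}

const obj = Object.create(null);
const map = new Map();

bench("object set", () => {
  for (let i = 0; i < N; i++) obj[keys[i % 1000]] = i;
});
bench("map set", () => {
  for (let i = 0; i < N; i++) map.set(keys[i % 1000], i);
});

let sink = 0;
bench("object get", () => {
  for (let i = 0; i < N; i++) sink += obj[keys[i % 1000]];
});
bench("map get", () => {
  for (let i = 0; i < N; i++) sink += map.get(keys[i % 1000]);
});
```

Whichever side wins here, a one-file loop like this is exactly the kind of measurement that can disagree with a fuller harness, which is probably why our numbers differ.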
Now, after seeing the perf difference, you might say, "That's the whole difference? So what?" But let me point out a property of Objects that makes them unsuitable as a generic map; you probably know the other caveats, but I'll leave this MDN link as well.
Even if you have a null prototype, weird stuff can be done:
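Here's a minimal sketch of the kind of quirks I mean (my own illustration, not the original snippet from this comment):

```javascript
// 1. Object property keys are coerced to strings, so distinct
//    object keys silently collide on "[object Object]":
const obj = Object.create(null);
obj[{ a: 1 }] = "first";
obj[{ b: 2 }] = "second";
console.log(Object.keys(obj)); // one key, not two

// 2. Integer-like string keys are iterated first, in ascending
//    numeric order, regardless of insertion order:
const obj2 = Object.create(null);
obj2.b = 1;
obj2["2"] = 2;
obj2.a = 3;
console.log(Object.keys(obj2)); // "2" comes before "b" and "a"

// Map keeps distinct keys distinct and preserves insertion order:
const map = new Map();
map.set({ a: 1 }, "first");
map.set({ b: 2 }, "second");
console.log(map.size); // 2
```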
I know that these aren't even possible in most cases, but they still make it clear that Map is simply more consistent at being... a Map!
In my opinion, objects are only superior when their keys are known at creation time.
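To illustrate what I mean (my own sketch, with a made-up `makePoint` example): an object whose keys are fixed at creation reads like a struct, and V8 can give it a stable hidden class, whereas a Map pays a per-key lookup on every access.

```javascript
// Fixed shape: every object from makePoint has the same two keys,
// so property access is effectively a field read.
function makePoint(x, y) {
  return { x, y };
}
const p = makePoint(1, 2);
console.log(p.x + p.y); // 3

// The same data in a Map goes through get() with a hashed key lookup.
const m = new Map([["x", 1], ["y", 2]]);
console.log(m.get("x") + m.get("y")); // 3
```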
Benchmark (node ^20)
Code