kibertoad / toad-cache

In-memory cache for Node.js and browser
MIT License

make Map the default #28

Closed gurgunday closed 1 year ago

gurgunday commented 1 year ago

Hey, happy user here

I'm interested in consolidating all the Lru and Fifo packages within the Fastify ecosystem onto yours

Just wanted to ask if you would be interested in making the Map versions of the caches the default named exports (Lru and Fifo)

In recent versions of V8, Map essentially always outperforms Object when it comes to read and write speeds

Now, after seeing the perf difference, you might say, "This is the difference? So what?" But let me point out one property of Objects that makes them unsuitable for generic Map functionality; you probably know the others, so I'll just leave this MDN link as well

Even if you have a null prototype, weird stuff can be done:

let lru = new Lru()
lru.set('toString', () => "Would throw but no longer")
`${lru.items}` // Output: Would throw but no longer

lru.set(Symbol.iterator, 'Breaks `this.items` iterator')

I know that these aren't even possible in most cases, but they still make it clear that Map is simply more consistent at being... a Map!

In my opinion, objects are only superior when their keys are known at creation time
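For contrast, here's a quick sketch (illustrative only, not toad-cache internals) of how a Map sidesteps both collisions, since entries never interfere with coercion or iteration:

```javascript
// A Map treats "toString" and Symbol.iterator as ordinary keys;
// Map's own iteration protocol lives on Map.prototype and is unaffected.
const store = new Map();
store.set("toString", () => "just a stored function");
store.set(Symbol.iterator, "just a stored value");

console.log(store.size);               // 2
console.log([...store.keys()].length); // 2 -- spreading keys still works fine
```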

Benchmark (Node ^20; times in milliseconds)

BENCHMARK  100
Object write took 0.060791969299316406
Object read took 0.1952500343322754
Map write took 0.01816701889038086
Map read took 0.01595902442932129
BENCHMARK  1000
Object write took 0.3558340072631836
Object read took 0.10799980163574219
Map write took 0.10654187202453613
Map read took 0.10749983787536621
BENCHMARK  10000
Object write took 3.391292095184326
Object read took 1.8417911529541016
Map write took 1.2463748455047607
Map read took 1.3308749198913574
BENCHMARK  1000000
Object write took 471.0659999847412
Object read took 242.29637503623962
Map write took 291.88587498664856
Map read took 224.64337515830994

Code

function benchmark(TIMES) {
  console.log("BENCHMARK ", TIMES);

  // Null prototype, so lookups don't hit Object.prototype
  const object = Object.create(null);

  let start = performance.now();
  for (let i = 0; i < TIMES; ++i) {
    object[`key_${i}`] = 1;
  }
  console.log("Object write took", performance.now() - start);

  start = performance.now();
  let result = 0;
  for (let i = 0; i < TIMES; ++i) {
    result += object[`key_${i}`];
  }
  console.log("Object read took", performance.now() - start);

  const map = new Map();

  start = performance.now();
  for (let i = 0; i < TIMES; ++i) {
    map.set(`key_${i}`, 1);
  }
  console.log("Map write took", performance.now() - start);

  start = performance.now();
  result = 0;
  for (let i = 0; i < TIMES; ++i) {
    result += map.get(`key_${i}`);
  }
  console.log("Map read took", performance.now() - start);

  // Return the sum so the read loops can't be optimized away
  return result;
}

benchmark(100);
benchmark(1_000);
benchmark(10_000);
benchmark(1_000_000);
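One caveat with one-shot loops like the above: the first run also pays JIT warm-up cost. A sketch of the same measurement with an untimed warm-up pass (the `timeIt` helper name is mine, not from the code above):

```javascript
// Time a function after one untimed warm-up call, so V8 has already
// optimized the hot loop before the clock starts.
function timeIt(label, fn) {
  fn(); // warm-up, untimed
  const start = performance.now();
  const result = fn();
  console.log(label, "took", performance.now() - start, "ms");
  return result; // returning keeps the reads observable (no dead-code elimination)
}

const N = 10_000;
const map = new Map();

timeIt("Map write", () => {
  for (let i = 0; i < N; ++i) map.set(`key_${i}`, 1);
  return map.size;
});

timeIt("Map read", () => {
  let sum = 0;
  for (let i = 0; i < N; ++i) sum += map.get(`key_${i}`);
  return sum;
});
```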
gurgunday commented 1 year ago

I can send a PR too if you're down

kibertoad commented 1 year ago

@gurgunday Interesting. My own benchmarks indicated the opposite, see:

https://github.com/kibertoad/nodejs-benchmark-tournament/blob/master/cache-get-inmemory/_results/results.md
https://github.com/kibertoad/nodejs-benchmark-tournament/blob/master/cache-set-inmemory/_results/results.md

Can you compare our measurement approaches and figure out why they differ in outcomes?

gurgunday commented 1 year ago

I don't see toad-cache-lru-map in the benchmarks, but I did a test after adding it and the results are the following:

cache-get-inmemory

{ cpu: { brand: 'M1', speed: '2.40 GHz' } }
| Node   | Option                     | Msecs/op       | Ops/sec  | V8                    |
| ------ | -------------------------- | -------------- | -------- | --------------------- |
| 20.8.0 | layered-loader-fifo-object | 0.191705 msecs | 5216.338 | V8 11.3.244.8-node.16 |
| 20.8.0 | layered-loader-fifo-map    | 0.205043 msecs | 4877.017 | V8 11.3.244.8-node.16 |
| 20.8.0 | toad-cache-lru-map         | 0.207531 msecs | 4818.554 | V8 11.3.244.8-node.16 |
| 20.8.0 | layered-loader-lru-object  | 0.212746 msecs | 4700.438 | V8 11.3.244.8-node.16 |
| 20.8.0 | toad-cache-lru             | 0.216082 msecs | 4627.863 | V8 11.3.244.8-node.16 |
| 20.8.0 | layered-loader-lru-map     | 0.232885 msecs | 4293.968 | V8 11.3.244.8-node.16 |
| 20.8.0 | tiny-lru                   | 0.237502 msecs | 4210.496 | V8 11.3.244.8-node.16 |
| 20.8.0 | dataloader                 | 1.335993 msecs | 748.507  | V8 11.3.244.8-node.16 |
| 20.8.0 | async-cache-dedupe         | 2.366963 msecs | 422.482  | V8 11.3.244.8-node.16 |

However, I don't trust this run since I'm on battery power right now; I will redo it when I get back home

gurgunday commented 1 year ago

Seems like the good ol' Object is quite a bit faster when it comes to set performance:

cache-set-inmemory

{ cpu: { brand: 'M1', speed: '2.40 GHz' } }
| Node   | Option             | Msecs/op       | Ops/sec  | V8                    |
| ------ | ------------------ | -------------- | -------- | --------------------- |
| 20.8.0 | toad-cache-lru     | 0.224106 msecs | 4462.169 | V8 11.3.244.8-node.16 |
| 20.8.0 | toad-cache-lru-map | 0.293277 msecs | 3409.748 | V8 11.3.244.8-node.16 |

cache-get-inmemory

{ cpu: { brand: 'M1', speed: '2.40 GHz' } }
| Node   | Option             | Msecs/op       | Ops/sec  | V8                    |
| ------ | ------------------ | -------------- | -------- | --------------------- |
| 20.8.0 | toad-cache-lru-map | 0.194286 msecs | 5147.056 | V8 11.3.244.8-node.16 |
| 20.8.0 | toad-cache-lru     | 0.204919 msecs | 4879.984 | V8 11.3.244.8-node.16 |

I would've preferred to see the opposite, but Map is still not there 🤣

kibertoad commented 1 year ago

@gurgunday can you create a PR with new benchmarks, btw? they would be useful for the future

gurgunday commented 1 year ago

Yeah, I just did 😁

https://github.com/kibertoad/nodejs-benchmark-tournament/pull/9