Closed by rokoroku 1 year ago
Merging #25 into master will not change coverage. The diff coverage is 100.00%.
@@ Coverage Diff @@
## master #25 +/- ##
=========================================
Coverage 100.00% 100.00%
=========================================
Files 1 1
Lines 22 22
=========================================
Hits 22 22
| Impacted Files | Coverage Δ |
|---|---|
| src/index.js | 100.00% <100.00%> (ø) |
Continue to review the full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 400b848...79ddb1b.
Reusing `y` sounds like a bad idea. However, the type (`string`) doesn't change, so I guess V8 could handle this.
Why can't we merge this PR @lukeed?
Sorry for the super late reply here. I was putting off benchmarking this properly because I had strong suspicions that it wasn't actually faster. I remember initially having a version of `clsx` that reused the value, but I didn't keep it because of the same findings. There's some internal deopt that happens (or, more likely, an optimization that doesn't happen) when the `typeof` isn't run directly ahead of value handling.
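To illustrate, the two patterns under discussion can be sketched roughly like this. Note this is a hypothetical simplification for illustration, not the actual `clsx` source; function names and branch logic here are invented stand-ins:

```javascript
// Sketch of the current master pattern: `typeof` runs immediately
// ahead of each branch, which V8 reportedly optimizes well.
function toValDirect(mix) {
  let str = '';
  if (typeof mix === 'string' || typeof mix === 'number') {
    str += mix;
  } else if (typeof mix === 'object' && mix !== null) {
    for (const key in mix) {
      if (mix[key]) str += (str && ' ') + key;
    }
  }
  return str;
}

// Sketch of this PR's pattern: the `typeof` result is cached in a
// variable and reused, which reportedly blocks that optimization.
function toValReuse(mix) {
  let str = '';
  const type = typeof mix; // reused value
  if (type === 'string' || type === 'number') {
    str += mix;
  } else if (type === 'object' && mix !== null) {
    for (const key in mix) {
      if (mix[key]) str += (str && ' ') + key;
    }
  }
  return str;
}
```

Both versions produce identical output for the same input; the difference being measured is purely engine-level performance.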
Here are the latest results on Node 18.12.1, keeping only the `clsx` rows for brevity:
The `clsx` entry is the current `master` branch code, and `clsx (reuse)` is this PR as written.
# Strings
clsx (reuse) x 12,763,491 ops/sec ±0.17% (99 runs sampled)
clsx x 14,270,189 ops/sec ±0.17% (96 runs sampled)
# Objects
clsx (reuse) x 10,240,638 ops/sec ±0.14% (97 runs sampled)
clsx x 10,443,788 ops/sec ±0.25% (96 runs sampled)
# Arrays
clsx (reuse) x 9,524,665 ops/sec ±0.14% (100 runs sampled)
clsx x 9,957,274 ops/sec ±0.42% (97 runs sampled)
# Nested Arrays
clsx (reuse) x 7,621,805 ops/sec ±0.32% (96 runs sampled)
clsx x 8,187,280 ops/sec ±0.24% (101 runs sampled)
# Nested Arrays w/ Objects
clsx (reuse) x 8,007,407 ops/sec ±0.20% (94 runs sampled)
clsx x 8,387,817 ops/sec ±0.25% (99 runs sampled)
# Mixed
clsx (reuse) x 8,245,897 ops/sec ±0.27% (96 runs sampled)
clsx x 8,957,557 ops/sec ±0.26% (95 runs sampled)
# Mixed (Bad Data)
clsx (reuse) x 3,536,745 ops/sec ±0.11% (97 runs sampled)
clsx x 3,676,780 ops/sec ±0.17% (99 runs sampled)
The current release is faster on every benchmark.
And to remove suspicions that `clsx` was using `clsx (reuse)` as a warmup, I reran with the order flipped:
# Strings
clsx x 14,297,355 ops/sec ±0.16% (98 runs sampled)
clsx (reuse) x 12,779,103 ops/sec ±0.17% (98 runs sampled)
# Objects
clsx x 10,510,706 ops/sec ±0.17% (102 runs sampled)
clsx (reuse) x 9,966,354 ops/sec ±0.50% (98 runs sampled)
# Arrays
clsx x 10,024,272 ops/sec ±0.27% (100 runs sampled)
clsx (reuse) x 9,099,094 ops/sec ±0.25% (96 runs sampled)
# Nested Arrays
clsx x 8,413,097 ops/sec ±0.43% (100 runs sampled)
clsx (reuse) x 7,556,043 ops/sec ±0.20% (97 runs sampled)
# Nested Arrays w/ Objects
clsx x 8,451,890 ops/sec ±0.27% (98 runs sampled)
clsx (reuse) x 8,079,092 ops/sec ±0.25% (96 runs sampled)
# Mixed
clsx x 9,070,077 ops/sec ±0.29% (97 runs sampled)
clsx (reuse) x 8,266,743 ops/sec ±0.23% (95 runs sampled)
# Mixed (Bad Data)
clsx x 3,737,597 ops/sec ±0.28% (97 runs sampled)
clsx (reuse) x 3,576,834 ops/sec ±0.19% (98 runs sampled)
The results are the same and more clearly demonstrate the deoptimization (or missing V8 optimization) that's occurring.
Thank you for the PR though! Sorry again for the delay, but I'm glad that I finally got some numbers to solidify my hesitations here.
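For anyone wanting to spot-check a comparison like this: the numbers above are in Benchmark.js output format (`x N ops/sec ±% (runs sampled)`). A dependency-free way to sketch the same kind of measurement is below. This is a rough illustration of the methodology, not the actual benchmark suite used for the results above; the `direct`/`reuse` functions are invented stand-ins:

```javascript
// Rough ops/sec measurement using only Node built-ins.
// A real suite (e.g. Benchmark.js) also handles statistical
// sampling and variance; this sketch only times a hot loop.
function measure(label, fn, iterations = 1e6) {
  fn('warmup'); // let the engine optimize before timing
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn('foo bar');
  const ns = Number(process.hrtime.bigint() - start);
  const opsPerSec = iterations / (ns / 1e9);
  console.log(`${label} x ${Math.round(opsPerSec).toLocaleString()} ops/sec`);
  return opsPerSec;
}

// Hypothetical stand-ins for the two implementations under test:
const direct = (s) => (typeof s === 'string' ? s : '');
const reuse = (s) => { const t = typeof s; return t === 'string' ? s : ''; };

measure('clsx', direct);
measure('clsx (reuse)', reuse);
```

Running each variant in both orders, as done above, helps rule out warmup effects favoring whichever implementation runs second.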
Hi, I think this small change improves the performance slightly ;)