Open cristianbote opened 3 months ago
Size Change: 0 B
Total Size: 3.84 kB
I'm exploring the disassembly of `xxh` on V8. It looks like it's doing a lot of validation and doesn't yet have confidence that the value is indeed a Smi, despite all the existing asm.js annotations, so I went overboard and added them everywhere. It does seem to help all engines (compare `xxh` and `xxh_unmodified` in this commit); I'll just keep the optimized version to continue the exploration.
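For context, here is a minimal sketch of the kind of integer hints being discussed; the function names are illustrative placeholders and the constant is simply one of the xxHash32 primes, this is not the actual commit:

```js
// Without hints: the engine has to keep proving that `h` stays an int32,
// and a plain multiply can overflow into a double.
function mixPlain(h, c) {
  h ^= c;
  h = h * 0x9e3779b1;
  return h;
}

// With asm.js-style hints: Math.imul keeps the multiplication in int32 and
// `| 0` pins every intermediate result, so the value can stay a Smi/int32.
function mixHinted(h, c) {
  h = (h ^ c) | 0;
  h = Math.imul(h, 0x9e3779b1) | 0;
  return h;
}
```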
I've also established in https://github.com/emotion-js/emotion/pull/3241#issuecomment-2308932268 that the final string-formatting function can have a big impact, so goober doing a simple `'go' + number` might be more efficient than `xxh` doing `.toString(36)`.
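As a rough illustration of the two formatting styles (the hash value below is arbitrary, not taken from either library):

```js
const h = 28500980 >>> 0; // some 32-bit hash result

// goober-style: string prefix plus decimal formatting -- cheap on V8
const gooberName = 'go' + h;     // "go28500980"

// xxh-style: base-36 formatting -- shorter class names, but more work per call
const xxhName = h.toString(36);  // "gyvhw"
```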
I need to explore more before giving a better answer, but I'll include the updated goober version in my benchmarks, even though the collision rate makes it a bad candidate for emotion. I'll try to figure out whether I can find a better algorithm for your requirements (very small byte size).
Note that you don't return anything from the goober hash functions in your tests; that might also skew the result (engines can elide a call whose unused result has no side effects). Also, your benchmark uses a single constant string as its dataset instead of a proper parameter, which is another thing engines can optimize around. It would be better to build a representative dataset of strings you see in production and run the perf tests on that.
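A minimal sketch of how both issues could be addressed, assuming a Benchmark.js-style harness (which the `±% (N runs sampled)` output below suggests); the `hashUnderTest` stand-in and the dataset contents are placeholders, not goober's actual code:

```js
const Benchmark = require('benchmark');

// Trivial stand-in for the hash being measured (swap in goober's hash, xxh32, etc.).
const hashUnderTest = (str) => {
  let h = 9;
  for (let i = 0; i < str.length; i++) h = Math.imul(h ^ str.charCodeAt(i), 0x5f356495) | 0;
  return 'go' + (h >>> 0);
};

// Representative inputs instead of a single constant string, so the engine
// cannot specialize the whole benchmark on one value.
const dataset = [
  'color:red;padding:4px',
  'display:flex;align-items:center;gap:8px',
  'font-size:14px;line-height:1.5;color:#333',
];

let sink = 0; // consume the result so the call cannot be elided as dead code
let i = 0;

new Benchmark.Suite()
  .add('hash over dataset', () => {
    sink += hashUnderTest(dataset[i++ % dataset.length]).length;
  })
  .on('cycle', (event) => console.log(String(event.target)))
  .on('complete', () => console.log('done, sink =', sink))
  .run();
```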
For the record, here is a confirmation that `'go' + number` (decimal formatting) is very efficient on V8; there is no significant difference in other engines.
Hello. I slightly modified the code to make xxh32 suitable for benchmarking, and obtained the following results:
# Node (v22.6.0)
Starting HASH!
▸ twind HASH x 3,099,745 ops/sec ±0.40% (100 runs sampled)
▸ goober original HASH x 5,780,316 ops/sec ±0.44% (95 runs sampled)
▸ goober optimized HASH x 14,695,271 ops/sec ±0.67% (95 runs sampled)
▸ goober optimized HASH with ASM hints x 14,728,415 ops/sec ±0.54% (95 runs sampled)
▸ xxh32 x 17,213,817 ops/sec ±0.31% (98 runs sampled)
Fastest is: xxh32
# Bun (1.1.34)
Starting HASH!
▸ twind HASH x 8,288,343 ops/sec ±0.74% (94 runs sampled)
▸ goober original HASH x 1,672,256 ops/sec ±0.17% (98 runs sampled)
▸ goober optimized HASH x 2,075,684 ops/sec ±0.96% (93 runs sampled)
▸ goober optimized HASH with ASM hints x 10,446,495 ops/sec ±0.90% (93 runs sampled)
▸ xxh32 x 25,015,040 ops/sec ±1.87% (86 runs sampled)
Fastest is: xxh32
Actually, considering the bundle size of goober, using xxh32 is overkill. I just wrote this because there was talk about my library. :)
This is related to, and in response to, https://github.com/emotion-js/emotion/pull/3241.
cc @romgrk
The weird part -- but kinda expected -- was that on Node 18.17.0 goober is consistently second, but then drops with newer versions. Besides that, it really is peculiar what is happening on Bun that makes it run so slow?! Anyway, I've copy-pasted the xxh implementation into goober's hash perf test, and these are the results:
Looking forward to your thoughts!