Open jawj opened 2 weeks ago
Thinking about it, we can probably limit the extra memory needed by doing it in chunks ...
I've packaged this code up on npm: https://www.npmjs.com/package/hextreme. For multi-megabyte arrays it can be many times faster than the approach currently taken here.
Are you willing to consider a PR to use this package for .toString('hex')?
I think we reasonably track Node LTS for the Buffer API, and with IE officially out of support, I'm happy to say that we could drop support for it in the code concerned.
Thanks. I actually added a fallback implementation for environments without TextDecoder anyway, which is still slightly faster than the current approach because it does a bit of loop unrolling.
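The loop-unrolled fallback mentioned here might look something like the sketch below (illustrative only; the function and table names are mine, not the actual code from the PR). It concatenates precomputed two-character hex strings, processing four bytes per loop iteration to reduce loop overhead:

```javascript
// Precompute the two-character hex string for every possible byte value.
const hexPairs = [];
for (let i = 0; i < 256; i++) hexPairs[i] = i.toString(16).padStart(2, '0');

// Hypothetical fallback encoder for environments without TextDecoder.
function toHexFallback(bytes) {
  let s = '';
  let i = 0;
  const len = bytes.length;
  const unrolledEnd = len - 3;
  // Unrolled: handle four bytes per iteration.
  for (; i < unrolledEnd; i += 4) {
    s += hexPairs[bytes[i]] + hexPairs[bytes[i + 1]] +
         hexPairs[bytes[i + 2]] + hexPairs[bytes[i + 3]];
  }
  // Tail: handle any remaining 0–3 bytes one at a time.
  for (; i < len; i++) s += hexPairs[bytes[i]];
  return s;
}
```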
I also limited the biggest possible temporary buffer allocation to 1MB, by doing the conversion in chunks, which makes very little difference in speed.
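The chunking idea could be sketched roughly as follows (the names and the pluggable `encodeChunk` parameter are mine for illustration, not hextreme's API). Capping each chunk at 512Ki input bytes keeps the temporary Uint16Array used by the TextDecoder technique at 1MiB:

```javascript
// Hypothetical chunked wrapper: encode the input in slices so the largest
// temporary allocation stays bounded, then join the partial strings.
function toHexChunked(bytes, encodeChunk, chunkBytes = 524288) {
  if (bytes.length <= chunkBytes) return encodeChunk(bytes);
  const parts = [];
  for (let i = 0; i < bytes.length; i += chunkBytes) {
    // subarray returns a view, not a copy, so chunking adds no input copies.
    parts.push(encodeChunk(bytes.subarray(i, i + chunkBytes)));
  }
  return parts.join('');
}
```

Since string joining is cheap relative to the per-byte encoding work, it is plausible that this bound costs very little in throughput, as noted above.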
I'll go ahead and submit a PR in the next few days then.
I've actually just applied the same technique to base64 encoding, so I might submit another PR for that at some point.
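One way the same "build ASCII bytes, decode once" idea extends to base64 is sketched below (a plain illustration of the principle, not hextreme's actual base64 code): write the output characters' ASCII codes into a Uint8Array, then let a single TextDecoder call produce the string.

```javascript
const B64_CHARS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
const decoder = new TextDecoder();

// Hypothetical base64 encoder: emit ASCII bytes, decode to a string at the end.
function toBase64(bytes) {
  const out = new Uint8Array(Math.ceil(bytes.length / 3) * 4);
  let o = 0, i = 0;
  const fullGroupsEnd = bytes.length - (bytes.length % 3);
  for (; i < fullGroupsEnd; i += 3) {
    // Pack 3 bytes into 24 bits, then emit four 6-bit digits.
    const n = (bytes[i] << 16) | (bytes[i + 1] << 8) | bytes[i + 2];
    out[o++] = B64_CHARS.charCodeAt(n >> 18);
    out[o++] = B64_CHARS.charCodeAt((n >> 12) & 63);
    out[o++] = B64_CHARS.charCodeAt((n >> 6) & 63);
    out[o++] = B64_CHARS.charCodeAt(n & 63);
  }
  const remaining = bytes.length - i;
  if (remaining === 1) {
    out[o++] = B64_CHARS.charCodeAt(bytes[i] >> 2);
    out[o++] = B64_CHARS.charCodeAt((bytes[i] & 3) << 4);
    out[o++] = 61; out[o++] = 61; // '=='
  } else if (remaining === 2) {
    const n = (bytes[i] << 8) | bytes[i + 1];
    out[o++] = B64_CHARS.charCodeAt(n >> 10);
    out[o++] = B64_CHARS.charCodeAt((n >> 4) & 63);
    out[o++] = B64_CHARS.charCodeAt((n & 15) << 2);
    out[o++] = 61; // '='
  }
  return decoder.decode(out);
}
```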
In a similar vein to #245, I have an implementation of toString('hex') that's between 2x and 4x faster than the existing one, depending on the browser/engine. See: https://jsbench.me/evm3ejel2i/3

Possible drawbacks of this implementation:

- It relies on TextDecoder (does this package support IE currently?)
- It allocates a Uint16Array to hold intermediate data

Would you accept a PR to switch to this implementation?
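For context, the TextDecoder technique might be sketched roughly as below (my reading of the approach; names are illustrative, and this assumes a little-endian platform, which a production implementation would detect). Each byte maps through a lookup table to a 16-bit value holding its two ASCII hex digits; decoding the raw bytes of that Uint16Array then yields the hex string in one call:

```javascript
const decoder = new TextDecoder();

// Lookup table: byte value -> 16-bit code unit holding its two ASCII hex
// digits (first digit in the low byte, so it comes first in little-endian
// memory order; endianness detection is omitted for brevity).
const HEX_DIGITS = '0123456789abcdef';
const LUT = new Uint16Array(256);
for (let i = 0; i < 256; i++) {
  LUT[i] = HEX_DIGITS.charCodeAt(i >> 4) | (HEX_DIGITS.charCodeAt(i & 15) << 8);
}

// Hypothetical encoder: fill a Uint16Array via the lookup table, then decode
// its underlying bytes (all ASCII) as the final hex string.
function toHex(bytes) {
  const out = new Uint16Array(bytes.length);
  for (let i = 0; i < bytes.length; i++) out[i] = LUT[bytes[i]];
  return decoder.decode(new Uint8Array(out.buffer, out.byteOffset, out.byteLength));
}
```

This avoids per-byte string concatenation entirely, which is plausibly where the 2x–4x speedup comes from.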