extremeheat closed this pull request 3 years ago
Linting is not passing.
Sure, I'm not aware of any breaking changes here; the existing logic remains the same, with the added ability to serialize BigInt using the native support in Buffer. If BigInt isn't supported (Node < 10), the typeof checks will simply fail to match. I'll look at this a bit more over the weekend, but I currently depend on some of this functionality for some of my projects through pnbt, so the options for handling 64-bit integer reads/writes come down to 1) keeping the [high, low] arrays and ignoring the upper 32 bits, or 2) converting back and forth (see the sketch below). What changes do you think are needed?
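To make the two options concrete, here's a minimal sketch (the function names are hypothetical, not the PR's code) of what each approach looks like for reading a 64-bit value:

```js
// Option 1: keep [high, low] 32-bit word arrays; consumers that only
// need 32 bits of range can ignore arr[0]
function readI64AsArray (buffer, offset) {
  return [buffer.readInt32BE(offset), buffer.readUInt32BE(offset + 4)]
}

// Option 2: convert to and from a native BigInt (requires Node >= 10.4)
function readI64AsBigInt (buffer, offset) {
  const high = BigInt(buffer.readInt32BE(offset))
  const low = BigInt(buffer.readUInt32BE(offset + 4))
  return (high << 32n) | low
}
```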
Tried using the new BigIntExt method from @Saiv46, but this requires the Node.js engine requirement to be bumped to 10.4. Also, standardjs fails because the version of standard the project is using (12) is incompatible with Node 10.4. Updating standard to the latest version would change a lot of the code and should probably be done outside this PR, so that's the major blocker here.
Updated the PR to use a fixed version of BigIntExt; it looks like protodef expects signed integers. Also, the isLE check is not needed, since splitting with shifts always produces big-endian word order; it would only be needed when reading from the stream. Until the standardjs issue gets fixed, I've added the file to the lint ignore list so the tests pass.
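To illustrate why the isLE check is unnecessary here, a small sketch (hypothetical helper, not the PR's code): the shift extracts the numeric high word first, regardless of the host platform's endianness.

```js
// Split a signed 64-bit BigInt into signed 32-bit [high, low] words.
// The arithmetic shift always yields the high word, independent of
// host endianness, so no isLE branch is needed.
function splitI64 (value) {
  const high = Number(BigInt.asIntN(32, value >> 32n))
  const low = Number(BigInt.asIntN(32, value))
  return [high, low]
}

splitI64(-22n) // [-1, -22] (high word first, i.e. big-endian word order)
```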
Can you please explain what you did with BigInt? This looks breaking to me.
Fixed valueOf and simplified it a bit further. I don't think there should be any breaking changes, though the code did look more complicated than it needed to be.
Basically, it returns an array whose primitive value is a BigInt, so when you apply <, >, +, -, /, %, comparisons, etc. it acts as a BigInt, but it's still an array, so array access, modification, and serialization all still work:
```js
class BigIntExtended extends Array {
  // Allow construction from a plain number or a bigint
  static fromInt (arg) {
    if (typeof arg === 'number') arg = BigInt(arg)
    // Split the bigint into signed 32-bit [high, low] words
    return new BigIntExtended(Number(BigInt.asIntN(32, arg >> 32n)), Number(BigInt.asIntN(32, arg)))
  }

  // Recombine the [high, low] words into a signed 64-bit bigint
  valueOf () {
    return BigInt.asIntN(64, BigInt(this[0]) << 32n) | BigInt.asUintN(32, BigInt(this[1]))
  }
}
```
```js
// usage
var be = BigIntExtended.fromInt(-22)
console.log(be + 64n) // 42n
console.log(be, be[0], be[1]) // BigIntExtended(2) [ -1, -22 ] -1 -22
console.log(be > -30n) // true
be[1] += 64 // lower half becomes positive
console.log(be + 0n) // -4294967254n (coerced to a bigint)
console.log(be > -30n) // false
be[0] += 1
JSON.stringify(be) // "[0,42]"
be.toString() // "0,42"
be.valueOf() // 42n
```
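For context on why this works: arithmetic and relational operators run JavaScript's ToPrimitive coercion, which calls valueOf() when it is defined, so the array transparently behaves as a BigInt in expressions, while JSON.stringify and toString still see a plain two-element array.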
It seems the CI is not running; I'll fix it after this PR. Can you run npm test? And can you run npm run benchmark before and after this PR?
All the tests are passing.

Benchmark on the `big` branch (local):

```
> mocha benchmark/benchmark_unified.js

read/write x 1,937 ops/sec ±18.61% (70 runs sampled)
    ✓ read/write (5698ms)
read/write (compiled) x 11,513 ops/sec ±5.66% (72 runs sampled)
    ✓ read/write (compiled) (5690ms)

2 passing (11s)
```

On master (local):

```
> mocha benchmark/benchmark_unified.js

read/write x 2,085 ops/sec ±14.73% (72 runs sampled)
    ✓ read/write (5640ms)
read/write (compiled) x 11,117 ops/sec ±13.91% (70 runs sampled)
    ✓ read/write (compiled) (5528ms)

2 passing (11s)
```

On Gitpod, `big` branch:

```
> protodef@1.8.3 benchmark /workspace/node-protodef
> mocha benchmark/benchmark_unified.js

read/write x 5,982 ops/sec ±5.10% (78 runs sampled)
    ✓ read/write (5717ms)
read/write (compiled) x 25,757 ops/sec ±2.30% (86 runs sampled)
    ✓ read/write (compiled) (5736ms)

2 passing (11s)
```

On Gitpod, master:

```
> protodef@1.8.3 benchmark /workspace/node-protodef
> mocha benchmark/benchmark_unified.js

read/write x 6,555 ops/sec ±3.62% (80 runs sampled)
    ✓ read/write (5730ms)
read/write (compiled) x 26,793 ops/sec ±2.65% (88 runs sampled)
    ✓ read/write (compiled) (5802ms)

2 passing (12s)
```
Removed the extra code.
BigInt writing support, and support for reading from a buffer offset
If you're reading from a stream of data, it's easier and more efficient to increment an offset position than to create a new buffer slice for every read.
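As a rough illustration (the reader functions are hypothetical, not the PR's API), the difference between the two styles:

```js
// Slice-based: each field read allocates a new Buffer view
function readTwoIntsSlicing (buffer) {
  const a = buffer.readInt32BE(0)
  buffer = buffer.slice(4) // new Buffer object per field
  const b = buffer.readInt32BE(0)
  return [a, b]
}

// Offset-based: one buffer, a single cursor that moves forward
function readTwoIntsAt (buffer, offset) {
  const a = buffer.readInt32BE(offset); offset += 4
  const b = buffer.readInt32BE(offset); offset += 4
  return { value: [a, b], offset } // caller continues from the new offset
}
```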