Closed: josephg closed this issue 4 years ago
For posterity, I've implemented (1). I think it's the only real choice. The caveat is that `decode(encode(1n)) === 1`, which I can live with. (You're already buying in to using bigints with the tuple encoder, and that behavior is better than the current behavior, in which encoding throws an exception.)
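The round-trip behaviour of option (1) could be sketched like this. `normalizeDecodedInt` is a hypothetical helper that illustrates the idea, not the library's actual API:

```typescript
// Sketch of option (1): bigints and numbers share one integer space, and
// decode() collapses a decoded value to a plain number whenever it fits
// safely. This is why decode(encode(1n)) === 1 rather than 1n.
function normalizeDecodedInt(value: bigint): number | bigint {
  if (value >= BigInt(Number.MIN_SAFE_INTEGER)
      && value <= BigInt(Number.MAX_SAFE_INTEGER)) {
    return Number(value) // safely representable: return a plain number
  }
  return value // too big for a double: keep it as a bigint
}
```

Under this scheme the only values that come back as bigints are ones a plain `number` could not have represented exactly in the first place.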
The tuple encoding semi-officially supports large integers (up to 255 bytes in length). Arbitrary precision integers are implemented in the Python and Java bindings, although apparently the two encodings are subtly different :/
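A variable-length integer encoding along these lines can be sketched as a length byte followed by big-endian magnitude bytes. The framing below is illustrative only (`encodeBigUint` is a hypothetical helper, and the real tuple format's type codes and negative-number handling are not shown):

```typescript
// Illustrative sketch: encode a non-negative bigint as a length byte
// followed by its big-endian magnitude. NOT the official tuple format.
function encodeBigUint(value: bigint): Uint8Array {
  if (value < 0n) throw new Error('this sketch only handles non-negative values')
  const bytes: number[] = []
  let v = value
  while (v > 0n) {
    bytes.unshift(Number(v & 0xffn)) // peel off the low byte
    v >>= 8n
  }
  if (bytes.length > 255) throw new Error('magnitude exceeds 255 bytes')
  // Note: 0n naturally becomes a zero-length magnitude here.
  return Uint8Array.from([bytes.length, ...bytes])
}
```

The length prefix is what gives the "up to 255 bytes" limit, since it has to fit in a single byte.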
Anyway, we should map these across to bigints. It's a bit awkward because JavaScript considers the numbers `10` and `10n` to be different, but the tuple encoding seems to consider them as just part of a sliding continuum.

There's an implementation decision here. We could:

1. Encode bigints and numbers into the same integer space, so `decode(encode(1n)) === 1`.
2. Give bigints their own encoding, but then `tuple(1n) > tuple(10)` and `tuple(-1n) < tuple(-10)`. We would also need another encoding for zero (`0n`), like maybe a zero-length arbitrary precision positive integer? Anyway, this feels gross and hacky.

I think the most sensible answer is (1), though I might put the behaviour behind a flag, and if that flag isn't set, continue to disallow any unsafe integer from being encoded.
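A minimal sketch of the flag-gated behaviour, assuming a hypothetical `bigIntSupport` option (the option name and `encodeInt` helper are illustrative, not an actual API):

```typescript
// Sketch: without the flag, unsafe integers are rejected (the current
// behaviour); with it, bigints are accepted and numbers and bigints are
// normalized into one integer space (option 1). Names are hypothetical.
function encodeInt(
  value: number | bigint,
  opts: { bigIntSupport?: boolean } = {}
): bigint {
  if (typeof value === 'number') {
    if (!Number.isSafeInteger(value)) {
      throw new Error('Cannot encode unsafe integer; pass it as a bigint with bigIntSupport enabled')
    }
    return BigInt(value)
  }
  if (!opts.bigIntSupport) {
    throw new Error('bigint values require the bigIntSupport flag')
  }
  return value
}
```

With the flag off, behaviour is unchanged from today; turning it on is the explicit buy-in to the `decode(encode(1n)) === 1` caveat.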