Closed extemporalgenome closed 10 months ago
Thanks for the great write-up. I like your suggested solution and will resolve this issue in the next release.
The first alternative is also appealing but I'm reluctant to risk breaking compatibility (unless it is to fix conformance to RFC 8949).
Thanks!
**Is your feature request related to a problem? Please describe.**

The duality of decoding into int64/uint64 when given an empty interface has great affinity to the CBOR data model, but in applications that predominantly model integers as int64, yet still want access to arbitrary-precision integers, `IntDecConvertSigned` still leaves an awkward gap (at least from what I can infer from the documentation):

- Values in `[math.MinInt64, math.MaxInt64]` succeed with an `int64`.
- Values in `(math.MaxInt64, math.MaxUint64]` fail with an `UnmarshalTypeError`.
- Values less than `math.MinInt64` or greater than `math.MaxUint64` succeed with a `*big.Int`.
**Describe the solution you'd like**

An additional `IntDecMode` which behaves like `IntDecConvertSigned`, except that in any case where overflowing values would yield an `UnmarshalTypeError`, they instead succeed with a `*big.Int`.
**Describe alternatives you've considered**

- `IntDecConvertSigned` could be amended to have the behavior described above, if doing so doesn't cause compatibility breakage. Since it doesn't appear that `*big.Int` support can be disabled, decoding arbitrary data into `any` can already yield a `*big.Int`, so it might be rare for a compatibility breakage to arise in practice, given that calling code already needs to tolerate receiving big integers.
- Using `IntDecConvertNone` as is. This means that, even when nearly all integer data fits in the `int64` range, the calling code must deal with `int64`, `uint64`, and `*big.Int`, as well as potentially overflow-prone conversions from `uint64` to `int64`. For such an application, this is less desirable than dealing with only `int64` and `*big.Int` data.