Demi-Marie opened 5 years ago
We need arithmetic to behave like arithmetic, not wrap, etc., but the issues would often arise from outside the arithmetic itself, so audits for overflows sound broadly useful.
At first blush, I'd expect all values to lie within specific ranges, but one might consider arbitrary but statically sized arithmetic for some applications, so vaguely like
trait WideningMul {
    fn widening_mul<const A: usize, const B: usize>(a: [u64; A], b: [u64; B]) -> [u64; A + B];
}
along with explicit size assertions that reduce the size. I personally avoid the generic array crate when possible, but it should support doing this now without waiting for const generics.
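As a concrete sketch of what such a `widening_mul` could do, here is schoolbook limb multiplication over little-endian `u64` limb arrays. Concrete sizes (2×2 limbs → 4 limbs) are used because returning `[u64; A + B]` currently requires the unstable `generic_const_exprs` feature; the function name follows the trait sketch above, and everything else is an illustrative assumption, not an existing API.

```rust
// Schoolbook widening multiply: (a as 128-bit) * (b as 128-bit) -> 256-bit,
// limbs stored little-endian. Sized concretely to stay on stable Rust.
fn widening_mul(a: [u64; 2], b: [u64; 2]) -> [u64; 4] {
    let mut out = [0u64; 4];
    for i in 0..2 {
        let mut carry: u64 = 0;
        for j in 0..2 {
            // 64x64 -> 128-bit partial product plus two u64 addends;
            // the sum fits in u128, so this cannot overflow.
            let t = (a[i] as u128) * (b[j] as u128)
                + out[i + j] as u128
                + carry as u128;
            out[i + j] = t as u64;
            carry = (t >> 64) as u64;
        }
        // Row i writes out[i + 2] for the first time, so plain assignment
        // of the final carry is correct.
        out[i + 2] = carry;
    }
    out
}
```

With `generic_const_exprs`, the same inner loop generalizes directly to `[u64; A] * [u64; B] -> [u64; A + B]`.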
(@thiolliere ^^)
I do agree that it is more than just nice to have, but:

1- It requires some dedicated development from someone.
2- I don't always agree with:

"While there is a performance penalty, I think it is worth it"

In many cases, you can make some assumptions that simply solve the problem with no performance penalty. `T` is bounded by one of the primitive types, and for the operations that you want to apply to it you can most often use the double size of `T` and get away with it.

To further support both: I think making an assumption along the lines of "ok, this will work for X years and then we can think about it via a runtime upgrade" is also okay with a reasonable X.
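The "double size of `T`" approach can be sketched as follows, assuming `T = u64` and widening intermediates to `u128`; the function and parameter names here are hypothetical, chosen for illustration.

```rust
// Compute balance * part / total without overflow by doing the
// intermediate arithmetic in the double-width type u128.
fn multiply_by_rational(balance: u64, part: u64, total: u64) -> u64 {
    assert!(total != 0 && part <= total);
    // Both factors fit in u64, so the product fits in u128, and
    // part <= total guarantees the result fits back into u64.
    ((balance as u128 * part as u128) / total as u128) as u64
}
```

This costs essentially nothing on 64-bit targets, but only works while `T` itself has a wider primitive to widen into, which is exactly the limitation the arbitrary-precision argument targets.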
Using fixed-precision integers in the runtime is a massive footgun, and requires audits for overflows and other bad behavior. Using arbitrary-precision integers avoids these problems. While there is a performance penalty, I think it is worth it, especially if we implement bignums as downcalls to native code.
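To make the footgun concrete, here is a minimal sketch contrasting silent wrapping with an explicitly checked alternative; the function and variable names are made up for illustration.

```rust
// Silently wraps when the true product exceeds u64::MAX. This is the
// behavior of `wrapping_mul`; a plain `*` panics in debug builds but
// wraps in release builds by default, which is exactly the kind of
// behavior an overflow audit has to hunt for.
fn unchecked_inflation(total_issuance: u64, factor: u64) -> u64 {
    total_issuance.wrapping_mul(factor)
}

// Overflow becomes an explicit, auditable `None` instead of a silently
// wrong balance.
fn checked_inflation(total_issuance: u64, factor: u64) -> Option<u64> {
    total_issuance.checked_mul(factor)
}
```

Arbitrary-precision integers remove the failure mode entirely rather than turning it into an error path, at the cost of allocation and slower arithmetic.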