In certain cases (e.g. in `exp_on_negative_values`, via `Rescale<0>`), we can pass a negative signed value to the `ShiftLeft` function, which we then left shift. This is technically undefined behaviour in C++, and this UB is detected by modern versions of UBSAN, which is unfortunate. For reference, see Section 5.8 Shift Operators:
> The value of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are zero-filled. If E1 has an unsigned type, the value of the result is E1 × 2^E2, reduced modulo one more than the maximum value representable in the result type. Otherwise, if E1 has a signed type and non-negative value, and E1 × 2^E2 is representable in the corresponding unsigned type of the result type, then that value, converted to the result type, is the resulting value; otherwise, the behavior is undefined.
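For illustration (this snippet is not from the library's sources, just a minimal standalone repro), building the following with `-fsanitize=shift-base` makes UBSAN report the shift of a negative value at runtime:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  std::int32_t a = -3;
  int shift = 2;
  // UB: a has a signed type and a negative value (per the 5.8/2 rule quoted
  // above), so UBSAN flags this line when built with -fsanitize=shift-base.
  std::int32_t shifted = a << shift;
  // Well-defined alternative that computes the same intended value (-12):
  std::int32_t multiplied = a * (std::int32_t{1} << shift);
  std::printf("%d %d\n", static_cast<int>(shifted), static_cast<int>(multiplied));
}
```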
It's pretty trivial to fix the UB without any performance degradation, as modern compilers generate identical code for `a * (1 << shift)` and `a << shift`, with the advantage that the former doesn't invoke UB when `a` is a signed negative value. See this godbolt session for details and to play with it (you can also add `-fsanitize=shift-base`): Clang 5.0 on x86-64 generates the same instructions in both cases, and GCC on ARM and ARM64 does as well.
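For what it's worth, the fix could look roughly like this. This is only a sketch: the actual `ShiftLeft` in the library may have a different signature or additional overloads for SIMD types, so the names here are illustrative.

```cpp
// Sketch of the proposed change, assuming a scalar ShiftLeft helper along
// these lines; the real function's signature may differ.
template <typename IntegerType>
IntegerType ShiftLeft(IntegerType a, int offset) {
  // Old form: return a << offset;   // UB when a is a negative signed value.
  // New form: multiply by 2^offset, which is well-defined for negative a and
  // compiles to the same shift instruction on current Clang/GCC.
  return a * (static_cast<IntegerType>(1) << offset);
}
```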
I believe we have a CLA already, but LMK.