schwehr closed this issue 6 years ago.
I have seen this as well when running ubsan. A similar thing happens when decoding signed ints. I will see how to fix this without taking too big of a speed hit.
Can you describe in prose what that function is doing? Is there a precondition check that can be done? Or maybe there is an alternative approach that isn't susceptible to overflow? Can it be converted to unsigned math, or can it switch to 64-bit ints? Often 64-bit math is just as fast. This raises the question of setting up some microbenchmarks to make it easier to check patches for large performance regressions. But, from my perspective, avoiding UB wins out over performance if there isn't an alternative that is fast.
It is the implementation of preprocessing as described in section 4 of the standard. It maps the difference to the previous sample to an unsigned int.
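As a rough illustration of that kind of preprocessor, here is a sketch of the section-4 mapping under the usual CCSDS 121.0 description: the signed difference between a sample and its prediction is folded into a non-negative code, with the intermediate difference widened to `int64_t` so no signed 32-bit overflow can occur. All names here are made up for illustration and are not libaec's actual identifiers.

```c
#include <stdint.h>

/* Illustrative sketch (not libaec's code): map the signed difference
 * between sample x and prediction p into an unsigned code, as the
 * standard's preprocessing step describes. Widening to int64_t means
 * the subtraction can never overflow. xmin/xmax bound the sample range. */
static uint32_t map_to_unsigned(int32_t x, int32_t p,
                                int32_t xmin, int32_t xmax)
{
    int64_t delta = (int64_t)x - (int64_t)p;   /* cannot overflow in 64 bit */
    int64_t theta = (int64_t)p - xmin;         /* headroom below prediction */
    int64_t above = (int64_t)xmax - p;         /* headroom above prediction */
    if (above < theta)
        theta = above;                         /* theta = min(p-xmin, xmax-p) */

    if (delta >= 0 && delta <= theta)
        return (uint32_t)(2 * delta);          /* small positive -> even codes */
    else if (delta < 0 && -delta <= theta)
        return (uint32_t)(-2 * delta - 1);     /* small negative -> odd codes */
    else
        return (uint32_t)(theta + (delta < 0 ? -delta : delta));
}
```

The 64-bit widening is exactly the "do everything in 64 bit" option discussed below; whether its cost is acceptable in the hot path is the open question.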
Doing everything in 64 bit helps, but it had a very noticeable speed impact last time I checked. This is easily the hottest function for encoding. `make bench` will show that, but you have to switch to signed encoding (`aec -s ...` in `benc.sh`) for this case.
That being said, I totally agree with you that UB should be avoided, even though `make test` will fail if the UB is not what we expect. I will have a look at it next week :smiley:
Happens in here:
I'm not sure how an overflow situation should be handled. I'm guessing that there was an assumption at a higher level that was violated. If not, `INT32_MAX` and `INT32_MIN` are useful to create safe math functions that protect against signed overflow (which is very definitely undefined behavior in C). https://en.cppreference.com/w/c/types/integer
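For instance, a checked subtraction can test against `INT32_MIN`/`INT32_MAX` *before* subtracting, so the overflowing operation is never executed. This is a generic sketch of that pattern, not a proposal for libaec's actual API:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical helper: compute a - b into *out, returning false
 * (and leaving *out untouched) if the result would overflow int32_t.
 * The precondition checks themselves cannot overflow because
 * INT32_MIN + b (for b > 0) and INT32_MAX + b (for b < 0) both
 * stay inside the int32_t range. */
static bool sub_i32_checked(int32_t a, int32_t b, int32_t *out)
{
    if ((b > 0 && a < INT32_MIN + b) ||   /* a - b would go below INT32_MIN */
        (b < 0 && a > INT32_MAX + b))     /* a - b would go above INT32_MAX */
        return false;
    *out = a - b;
    return true;
}
```

Compilers also offer `__builtin_sub_overflow` (GCC/Clang), which generates the same check more cheaply, if relying on a builtin is acceptable here.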