
APFloat's fusedMultiplyAdd gives an incorrectly rounded result for IEEEfloat #104984

Open sizn-sc opened 2 months ago

sizn-sc commented 2 months ago

While comparing APFloat against berkeley-softfloat-3e, I found a discrepancy in fusedMultiplyAdd in a particular corner case:

#include <cmath>
#include <iostream>
#include <bit>
#include <cstdint>

int main()
{
    static_assert(sizeof(float) == 4);
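    // Inputs chosen so that a*b lands in the single-precision subnormal range;
    // c rounds to the negative of the smallest positive subnormal (-0x1p-149).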
    auto a = 0.24999998f;
    auto b = 2.3509885e-38f;
    auto c = -1e-45f;
    auto d = std::fmaf(a, b, c);
    // Clang with optimizations constant-folds d to 0x3ffffe; without
    // optimizations (runtime fmaf) the result is 0x3fffff.
    std::cout << std::hex << std::bit_cast<uint32_t>(d) << "\n";
}

A reproduction is available on Compiler Explorer. This occurs for the NearestTiesToEven and NearestTiesToAway rounding modes. The issue was originally discovered by linking against LLVMSupport and using APFloat directly, but it also affects constant folding with the default rounding mode.
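For reference, a minimal sketch of how the same case can be driven through APFloat directly (assuming the LLVM headers are available and the program is linked against LLVMSupport, as described above):

#include "llvm/ADT/APFloat.h"
#include "llvm/Support/Format.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main()
{
    // Same inputs as the fmaf reproducer above.
    APFloat a(0.24999998f);
    APFloat b(2.3509885e-38f);
    APFloat c(-1e-45f);
    // Computes a = a * b + c under the given rounding mode; the
    // mis-rounding was observed for NearestTiesToEven (and NearestTiesToAway).
    a.fusedMultiplyAdd(b, c, APFloat::rmNearestTiesToEven);
    // 0x3fffff matches GCC and unoptimized Clang; the buggy path yields 0x3ffffe.
    outs() << format_hex(a.bitcastToAPInt().getZExtValue(), 8) << "\n";
}

Per the results above, this sketch printing 0x3ffffe instead of 0x3fffff reproduces the mis-rounding.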

GCC and the native x86 FPU agree with Clang without optimizations. This is likely a case of incorrect rounding and is unrelated to https://github.com/llvm/llvm-project/issues/63895.
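(For context: both candidate outputs are subnormals. 0x003ffffe and 0x003fffff have a zero exponent field, so they encode (2^22 - 2) * 2^-149 and (2^22 - 1) * 2^-149 respectively, i.e. the two foldings differ by exactly 1 ulp at the very bottom of the single-precision range.)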

(cc @eddyb @beetrees )

beetrees commented 2 months ago

I've checked and this will be fixed by #98721.

sizn-sc commented 2 months ago

I've also found input data for half- and double-precision FMA that seemingly triggers the same bug (an off-by-1-ulp error):

Since this problem is fixed by your PR, perhaps these cases could be added as regression tests as well?
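A hypothetical sketch of what such a regression test might look like, modeled on the existing tests in llvm/unittests/ADT/APFloatTest.cpp; only the IEEEsingle inputs from this issue are shown (the half/double cases would be added analogously), and the expected bit pattern is the one GCC and unoptimized Clang produce:

#include "llvm/ADT/APFloat.h"
#include "gtest/gtest.h"
#include <cstdint>

using namespace llvm;

// Hypothetical regression test for the IEEEsingle case reported in this issue.
TEST(APFloatTest, FusedMultiplyAddSubnormalRounding) {
    APFloat a(0.24999998f);
    APFloat b(2.3509885e-38f);
    APFloat c(-1e-45f);
    a.fusedMultiplyAdd(b, c, APFloat::rmNearestTiesToEven);
    // 0x3fffff is the correctly rounded result; the pre-fix code gave 0x3ffffe.
    EXPECT_EQ(uint64_t{0x3fffff}, a.bitcastToAPInt().getZExtValue());
}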