JuliaLinearAlgebra / libblastrampoline

Using PLT trampolines to provide a BLAS and LAPACK demuxing library.
MIT License

Inconsistent rounding with BLAS #106

Closed orkolorko closed 1 year ago

orkolorko commented 1 year ago

I am opening this issue on libblastrampoline since the behavior seems to be BLAS independent. I'm working with @lucaferranti to ensure that IntervalLinearAlgebra's rigorous enclosure of the matrix product works as expected, and we used a test from Theveny P., "Numerical Quality and High Performance in Interval Linear Algebra on Multi-Core Processors".

We start by setting the rounding mode to Up for Float64 using an LLVM intrinsic (the equivalent Julia interface could be used instead, but the intrinsic seems to be more stable across platforms):

ccall("llvm.set.rounding", llvmcall, Cvoid, (Int32, ), 2)

We take the vector v = [1.0; 2^(-53)]; then BLAS.dot(v, v) == nextfloat(1.0) holds, i.e. the dot product is correctly rounded up.
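
A minimal sketch of that check (assuming the rounding mode has been set to upward as above):

```julia
using LinearAlgebra

v = [1.0; 2.0^(-53)]

# The exact dot product is 1 + 2^(-106), which is not representable in Float64;
# with upward rounding it should round to nextfloat(1.0).
BLAS.dot(v, v) == nextfloat(1.0)   # true: dot respects the rounding mode
```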

Now let A be the matrix A = [1.0 2^(-53); 0 2^(-53)]. We would then expect B = BLAS.gemm('N', 'T', 1.0, A, A) to satisfy B[1, 1] == nextfloat(1.0), but this comparison returns false.
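
And the corresponding gemm check (again a sketch, with the rounding mode still set to upward):

```julia
using LinearAlgebra

A = [1.0 2.0^(-53); 0.0 2.0^(-53)]

# gemm('N', 'T', 1.0, A, A) computes 1.0 * A * transpose(A); its (1,1) entry is
# exactly 1 + 2^(-106), so with upward rounding nextfloat(1.0) is expected.
B = BLAS.gemm('N', 'T', 1.0, A, A)
B[1, 1] == nextfloat(1.0)   # returns false in the report, contrary to expectation
```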

So it seems that the rounding-mode context is propagated to the dot operation but not to the gemm function. We tested this with both multithreaded and single-threaded BLAS, and tried OpenBLAS, OpenBLAS compiled with the CONSISTENT_FPCSR flag (which keeps the rounding mode consistent across all threads), MKL, and BLISBLAS; the unexpected behavior seems to be independent of the BLAS package.
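
For reference, this is roughly how the threading and backend variations can be driven from Julia (a sketch; the specific thread counts are assumptions, and loading MKL.jl or BLISBLAS.jl swaps the backend through libblastrampoline):

```julia
using LinearAlgebra

BLAS.set_num_threads(1)                   # single-threaded BLAS
# BLAS.set_num_threads(Sys.CPU_THREADS)   # or multithreaded

# Swapping the backend via libblastrampoline, e.g.:
# using MKL        # makes MKL the active BLAS/LAPACK
# using BLISBLAS   # or BLIS

BLAS.get_config()   # shows which BLAS/LAPACK libraries are currently loaded
```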

Do you have any idea why this happens? Is it expected by design of the BLAS interface, or is it a bug?


Further tests showed that BLAS is behaving as expected; the earlier tests probably had an error in how the rounding mode was set. Closing the issue.