SciML / LinearSolve.jl

LinearSolve.jl: a high-performance unified interface for linear solvers in Julia. Easily switch between factorization and Krylov methods and add preconditioners, all through one interface.
https://docs.sciml.ai/LinearSolve/stable/
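A minimal sketch of that unified interface, assuming a recent LinearSolve.jl (the problem data here is made up for illustration):

```julia
using LinearSolve

A = rand(4, 4)
b = rand(4)
prob = LinearProblem(A, b)

# Solve with a dense LU factorization...
sol = solve(prob, LUFactorization())

# ...or swap in a Krylov method without changing the problem setup.
sol = solve(prob, KrylovJL_GMRES())

sol.u  # the solution vector
```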

Set up Accelerate and MKL for 32-bit, MKL getrf, and Metal.jl integration #361

Closed: ChrisRackauckas closed this 1 year ago

ChrisRackauckas commented 1 year ago

Kind of a lot in a single PR, but this is what I ended up with after a flight.
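For context, the PR exposes these backends as LinearSolve.jl algorithm types. A hedged sketch of how they would be selected once the extensions load: `MetalLUFactorization` appears in the error report below, while `MKLLUFactorization` and `AppleAccelerateLUFactorization` are inferred from the changed files (`ext/LinearSolveMKLExt.jl`, `src/appleaccelerate.jl`) and should be treated as assumptions.

```julia
using LinearSolve
using Metal  # loading Metal triggers the LinearSolveMetalExt extension (macOS only)

# Float32 data: the 32-bit paths this PR targets
A = rand(Float32, 1000, 1000)
b = rand(Float32, 1000)
prob = LinearProblem(A, b)

# Dense LU on the GPU via Metal
sol = solve(prob, MetalLUFactorization())

# CPU alternatives set up in the same PR (assumed names, see above)
sol = solve(prob, MKLLUFactorization())
sol = solve(prob, AppleAccelerateLUFactorization())
```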

codecov[bot] commented 1 year ago

Codecov Report

Merging #361 (c3a93d9) into main (9224384) will decrease coverage by 3.88%. The diff coverage is 1.31%.

@@            Coverage Diff             @@
##             main     #361      +/-   ##
==========================================
- Coverage   73.83%   69.95%   -3.88%     
==========================================
  Files          19       20       +1     
  Lines        1353     1428      +75     
==========================================
  Hits          999      999              
- Misses        354      429      +75     
Files Changed               Coverage            Δ
ext/LinearSolveMKLExt.jl    38.09% <0.00%>      -54.22% ↓
ext/LinearSolveMetalExt.jl   0.00% <0.00%>      ø
src/LinearSolve.jl          97.29% <ø>          ø
src/appleaccelerate.jl       5.06% <0.00%>      -2.21% ↓
src/extension_algs.jl       70.83% <100.00%>    ø


christiangnrd commented 1 year ago

Tried to run the Metal benchmark and I'm getting this error:

ERROR: MethodError: no method matching do_factorization(::MetalLUFactorization, ::Matrix{Float32}, ::Vector{Float32}, ::Vector{Float32})

ChrisRackauckas commented 1 year ago

You need to re-instantiate the environment for it to pick up the extension from a PR branch.
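A minimal sketch of that workflow, assuming a local checkout of the PR branch (the path and environment here are illustrative, not from the thread):

```julia
using Pkg
Pkg.activate(".")                            # the environment used for the benchmark
Pkg.develop(path="path/to/LinearSolve.jl")   # local checkout of the PR branch (hypothetical path)
Pkg.instantiate()                            # re-resolve so the new extension code is picked up

using LinearSolve, Metal   # loading Metal now triggers LinearSolveMetalExt
```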

ViralBShah commented 1 year ago

Why do we need MKL 32-bit?

ChrisRackauckas commented 1 year ago

GPU comparisons, Neural ODE training, mixed-precision algorithms, etc.