gridap / GridapSolvers.jl

Solvers and preconditioners for the Gridap ecosystem.
https://gridap.github.io/GridapSolvers.jl/
MIT License

Using BackSlashSolver as default coarsest grid solver in serial computations is dangerous #28

Closed amartinhuertas closed 3 months ago

amartinhuertas commented 1 year ago

The GMG preconditioner constructor uses Gridap.Algebra.BackslashSolver() as the default solver for the coarsest grid problem. In particular, in the following line:

https://github.com/gridap/GridapSolvers.jl/blob/47955a82d55958cc0585780325c005db3755efe2/src/LinearSolvers/GMGLinearSolvers.jl#L18

I think this is dangerous from a performance point of view, as Gridap.Algebra.BackslashSolver() is implemented such that each call to solve!(...) recomputes the full LU factorization.
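To make the cost difference concrete, here is a minimal sketch using plain LinearAlgebra (not the Gridap API; the matrix and sizes are made up for illustration):

```julia
using LinearAlgebra

n = 200
A = rand(n, n) + n * I   # well-conditioned test matrix
b = rand(n)

# BackslashSolver-like behaviour: every solve refactorizes A
# (an O(n^3) factorization on each call)
x1 = A \ b

# LUSolver-like behaviour: factorize once (O(n^3)), then every
# subsequent solve is only forward/backward substitution (O(n^2))
F = lu(A)
x2 = F \ b

x1 ≈ x2   # same solution, very different cost for repeated solves
```

In a GMG V-cycle the coarsest-grid solve is invoked once per cycle, so the refactorization cost is paid over and over.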

I would use LUSolver() instead, which performs the factorization once and only applies backward/forward substitution on each call to solve!(...). However, when using LUSolver() in a single process, I have realized that the following line:

https://github.com/gridap/GridapSolvers.jl/blob/47955a82d55958cc0585780325c005db3755efe2/src/LinearSolvers/GMGLinearSolvers.jl#L189

triggers an error, as xh and rh are of type SubVector, and solve!(...) is not overridden for that particular combination of types. The workaround is to define an additional method specialization:

function LinearAlgebra.ldiv!(Y::SubArray,
                             A::LinearAlgebra.LU,
                             B::SubArray)
  # Copy out of the SubArrays, solve on plain Arrays, and copy the
  # solution back into Y (solving into collect(Y) alone would discard
  # the result, since collect makes an independent copy).
  y = collect(Y)
  LinearAlgebra.ldiv!(y, A, collect(B))
  copyto!(Y, y)
  return Y
end

However, I am not sure this is the best way to go. In particular, I wonder whether there is a more efficient way of handling the SubArray-to-Array conversion. Two possible ideas: (1) avoid the copy entirely when the SubArray is contiguous; (2) when it is not, keep preallocated arrays in a cache so that we reduce the overhead of dynamic memory allocation.
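A possible shape for both ideas together, purely as a sketch: the name `ldiv_subarray!` and the explicit cache argument are hypothetical and not part of GridapSolvers.

```julia
using LinearAlgebra

# Idea (1): a contiguous SubArray is a strided vector, so the LU
# ldiv! can operate on it in place with no copies.
# Idea (2): for a non-contiguous view, reuse a preallocated cache
# (same length as Y/B) instead of allocating fresh arrays per solve.
function ldiv_subarray!(Y::SubArray, F::LU, B::SubArray,
                        cache::Vector{Float64})
  if Base.iscontiguous(Y) && Base.iscontiguous(B)
    copyto!(Y, B)
    ldiv!(F, Y)        # in place, no temporaries
  else
    copyto!(cache, B)  # reuse the cache, no dynamic allocation
    ldiv!(F, cache)
    copyto!(Y, cache)
  end
  return Y
end
```

The cache would naturally live in the numerical setup object of the solver, next to the stored factorization.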

On the other hand, another consideration is that BackslashSolver() is implemented in GridapDistributed such that it can handle distributed data objects when the coarsest grid problem is distributed among processes. I guess this is not the case for LUSolver(); I did not check it.

JordiManyer commented 1 year ago

Mhm, I think we can get around the SubVector issue:

Right now, the SubVector typing appears because of

map_parts(xh.owned_values,rh.owned_values) do xh, rh
  solve!(xh,caches,rh)
end

but I believe that we can actually use

map_parts(xh.values,rh.values) do xh, rh
  solve!(xh,caches,rh)
end

since there are no ghost values if we have a single processor. That should get rid of the weird typing.

What do you think @amartinhuertas ?

amartinhuertas commented 1 year ago

> What do you think @amartinhuertas ?

Yes, I think that would solve one of the two issues. I will do it.

The second issue should be easy to solve, if it is not solved already; I have to check, perhaps it is not an issue at all.