gridap / GridapODEs.jl

Time stepping for Gridap

Support for GridapDistributed #66

Closed. oriolcg closed this issue 2 years ago.

oriolcg commented 2 years ago

Hi @santiagobadia, @amartinhuertas, @fverdugo,

I started the developments to support GridapDistributed in GridapODEs. The idea I had in mind is to try to keep the same structures and just add support for the new types that are not in Gridap. Do you think this is the way forward? Did you already have something in mind for the transient case?

For the moment, I just had to adjust some types and add a function to support concatenation of several jacobians with the PartitionedArrays structures. You can check the changes introduced so far here.
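
For context, the combination of jacobians involved in a θ-method step boils down to a weighted sum of the jacobian with respect to u and the jacobian with respect to ∂u/∂t, accumulated into a single matrix. A minimal serial sketch of that combination (illustration only, with hand-written matrices and a typical θ-method weighting; the distributed case has to perform the same sum on PartitionedArrays matrices):

using LinearAlgebra, SparseArrays

θ, Δt = 0.5, 0.1
M = sparse(1.0I, 4, 4)                                      # jacobian w.r.t. ∂u/∂t (mass matrix)
K = spdiagm(-1 => -ones(3), 0 => 2ones(4), 1 => -ones(3))   # jacobian w.r.t. u (stiffness matrix)
J = (1/(θ*Δt))*M + K                                        # single matrix handed to the linear solver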

At the current stage, the code is able to assemble the vectors and matrices of a transient problem (I haven't checked yet whether they are correct). However, I'm facing some issues when operating with them. In particular, I get an error when doing a matvec multiplication here: https://github.com/gridap/GridapODEs.jl/blob/819bfeba5d75cfcc167113d4158f697493068f44/src/ODETools/AffineThetaMethod.jl#L71

The error that I get is a failed check at https://github.com/fverdugo/PartitionedArrays.jl/blob/821376a3de5c51a18a9b350a13b5261782ae84ce/src/Interfaces.jl#L2086
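
For reference, a serial analogue of the operation at that line might look as follows (only a sketch, assuming the standard affine θ-method update for M u̇ + K u = f; the actual implementation may organize the step differently). The failing check appears to be the distributed counterpart of the usual dimension requirement: the index partition of the vector has to agree with the column layout of the matrix (see the explanation further down in this thread).

using LinearAlgebra

θ, Δt = 0.5, 0.1
M = Matrix{Float64}(I, 4, 4)                    # mass matrix (from the m form)
K = Matrix(SymTridiagonal(2ones(4), -ones(3)))  # stiffness matrix (from the a form)
f = zeros(4); uⁿ = rand(4)

rhs = M*uⁿ - (1 - θ)*Δt*(K*uⁿ) + Δt*f   # matvecs with the previous solution vector
u⁺  = (M + θ*Δt*K) \ rhs                # affine system solved at each time step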

The code that I'm using is the following:

module tmp
using Gridap
using GridapODEs.ODETools
using GridapODEs.TransientFETools
using PartitionedArrays
using GridapDistributed

const L = 2π
const n = 4
const order = 1
# Distributed Cartesian model partitioned into 2 parts (sequential backend)
parts = get_part_ids(sequential,(2,))
domain = (0,L)
cells = (n,)
𝒯 = CartesianDiscreteModel(parts,domain,cells)
Ω = Interior(𝒯)
dΩ = Measure(Ω,2*order)
# FE spaces: a transient trial space built on top of a standard test space
refFEᵤ = ReferenceFE(lagrangian,VectorValue{1,Float64},order)
V = TestFESpace(𝒯,refFEᵤ)
U = TransientTrialFESpace(V)
# Weak form of a simple heat equation: mass, stiffness and (zero) forcing terms
m(uₜ,v) = ∫( uₜ⋅v )dΩ
a(u,v) = ∫( ∇(u)⊙∇(v) )dΩ
b(v) = 0
op = TransientConstantFEOperator(m,a,b,U,V)
# Initial condition and θ-method time integrator (Δt = 0.1, θ = 0.5)
u₀(x) = VectorValue(cos(x[1]))
xₕ₀ = interpolate(u₀,U(0.0))
linear_solver = BackslashSolver()
ode_solver = ThetaMethod(linear_solver,0.1,0.5)
xₕₜ = solve(ode_solver,op,xₕ₀,0,0.1)
# Iterating over the lazy solution triggers the time stepping (and the error)
for (xₕ,t) in xₕₜ
  xₕ
end

end

I believe the error comes from the fact that I'm not handling some vector correctly, but probably some of you can help me figure out faster where I might be doing something wrong.

Thank you!

amartinhuertas commented 2 years ago

Hi @oriolcg! I am currently on leave until 17th January, but I read your post and I think I know the cause of the matvec problem (maybe I am wrong, I am on a mobile phone with a poor internet connection), so I may be able to save you some debugging time. (To answer the rest of your questions I will need some extra time after I come back.) I indeed faced the same problem while parallelizing GridapGeosciences.jl.

Long story short: the data distribution layout (i.e., index ranges) of the columns of the matrix and that of the FE space (and thus that of the array of free dof values) do not match. To work around it, you have to transform the data distribution layout of the latter so that it matches the one of the former. To this end, see the following code in GridapGeosciences.jl: https://github.com/gridapapps/GridapGeosciences.jl/blob/bcc9ed176abf324c7ffbb6db2313147d65dee6a0/src/DiagnosticTools.jl#L82

Anyway, do not take this as a definitive solution, just as a workaround. @fverdugo and I have discussed several times the pros and cons of the current design of distributed data layouts of FE spaces versus linear algebra, and this may change in the future. Ideally we should not expose this to the user, either by hiding the conversion under the hood or by designing everything so that the conversion is avoided altogether.
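
For illustration, the conversion being described could be sketched roughly as below. This is only a sketch of the idea, not what the linked GridapGeosciences line actually does: it assumes GridapDistributed's (non-exported) change_ghost helper is available in the installed version, and it assumes cols is the column index partition of the assembled PSparseMatrix, obtained however the installed PartitionedArrays version exposes it.

using PartitionedArrays, GridapDistributed

# Hypothetical helper, illustration only: given the vector of free dof values x
# (laid out according to the FE space) and the column index partition cols of
# the assembled matrix, return a vector with the matrix column layout so that
# matvec products pass the partition-consistency checks.
match_column_layout(x::PVector, cols) = GridapDistributed.change_ghost(x, cols)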

oriolcg commented 2 years ago

Thanks @amartinhuertas for the quick reply. I'll take a look at this workaround and try to see how to implement it in GridapODEs.

On a side note, I have a first simple test working for the distributed case (a simple heat equation with a single field and without matvec operations), see d4e8f455b7b2febedc29b082d2ad7c40c0e0976a

santiagobadia commented 2 years ago

Work in GridapDistributed