gridapapps / GridapGeosciences.jl

Gridap drivers for geoscience applications
MIT License

Misc comments #1

Closed santiagobadia closed 3 years ago

santiagobadia commented 4 years ago

After reading some comments in the code, it seems we have some issues to discuss:

  1. The @law for dispersion is not working. We should at least try to identify what the problem is.
  2. The doubt about why there is a minus sign in the Neumann term. I think the answer is simple: since the problem is nonlinear, we are providing the solver with the residual. The Neumann term is one of the terms of the residual, and there it comes with a minus sign, right?

On the other hand, there are some additional developments:

  1. Create tests with a method of manufactured solutions that will tell us whether the implementation is correct.
  2. Implement the transient version of the driver using GridapODEs.
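As a hedged illustration of the manufactured-solutions idea (a 1D finite-difference toy, not the Gridap driver itself): pick an exact solution, derive the forcing term analytically, solve the discrete problem with that forcing, and check that the error shrinks at the expected rate under refinement.

```julia
# Method of manufactured solutions on a 1D toy problem -u'' = f on (0,1)
# with homogeneous Dirichlet BCs, using second-order finite differences.

# 1. Pick an exact solution and derive the forcing term analytically.
u_exact(x) = sin(pi*x)          # manufactured solution
f(x)       = pi^2*sin(pi*x)     # f = -u'' for this choice

# 2. Solve the discrete problem with the manufactured forcing.
function solve_poisson_1d(n)
    h = 1/(n+1)
    x = [i*h for i in 1:n]      # interior nodes
    A = zeros(n, n)
    for i in 1:n
        A[i,i] = 2/h^2
        i > 1 && (A[i,i-1] = -1/h^2)
        i < n && (A[i,i+1] = -1/h^2)
    end
    uh = A \ f.(x)
    # 3. Report the max-norm error against the exact solution.
    maximum(abs.(uh .- u_exact.(x)))
end

e1 = solve_poisson_1d(20)
e2 = solve_poisson_1d(40)
# Halving h should reduce the error roughly 4x for a second-order scheme.
```

If the implementation is correct, `e1/e2` sits near 4; a wrong sign or a missing term shows up immediately as a stalled convergence rate.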

With regard to profiling:

  1. Compare the computation time of the integration/assembly of Gridap vs FEMPAR. (Take into account that the first run involves JIT compilation, so do one warm-up solve first.)
  2. Consider using a cheaper solver, e.g., an iterative multigrid solver or a PETSc solver. Right now we are using Julia's \, and I am not sure how efficient it is going to be.
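The warm-up point above can be sketched in a few lines; `assemble_and_solve` is just a stand-in for the real driver.

```julia
# Excluding JIT compilation from timings: the first call to a Julia
# function triggers compilation, so time only from the second call on.
function assemble_and_solve(n)    # stand-in for the real driver
    # A small symmetric positive-definite test system.
    A = [1/(i + j - 1) + (i == j) for i in 1:n, j in 1:n]
    b = ones(n)
    A \ b                          # the `\` direct solver
end

assemble_and_solve(100)            # warm-up call: pays the JIT cost
t = @elapsed assemble_and_solve(100)  # representative timing
```

The same caveat applies when comparing against FEMPAR: only the post-warm-up numbers are comparable.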

What else?

santiagobadia commented 4 years ago

One issue that is important in terms of performance/correctness, @dagarcial:

I think you don't need a @law here:

https://github.com/gridapapps/GridapGeosciences.jl/blob/f77ed1002890856e8efe96513c31037da49d448d/src/DarcyElectricTransport.jl#L117

You can just use a function in terms of x, i.e.,

const n_src = ...
const xsrc = ...
const σ_m = ...
const σ_s = ...

function fe(x)  # Source as exponential function
  s = 0.0  # Float64 accumulator for type stability
  for i in 1:n_src
    s += ((-1)^(i+1))*σ_m*exp(-((x[1]-xsrc[i,1])^2+(x[2]-xsrc[i,2])^2)/(2*σ_s^2))
  end
  return s
end

And then

function res_electrostatics(c,ϕ,ξ)
   ∇(ξ)*σ(c,∇(ϕ)) + ξ*fe
end
santiagobadia commented 4 years ago

Replace

n_src = 2
xsrc = zeros((2,2))
xsrc[1,:] = [0.4,1.0]
xsrc[2,:] = [0.6,1.0]

with

const n_src = 2
const xsrc = (0.4, 1.0, 0.6, 1.0)

for performance, and put the whole driver in a function for JIT compilation.

dagarcial commented 4 years ago

> One issue that is important in terms of performance/correctness, @dagarcial:
>
> I think you don't need a @law here:
>
> https://github.com/gridapapps/GridapGeosciences.jl/blob/f77ed1002890856e8efe96513c31037da49d448d/src/DarcyElectricTransport.jl#L117
>
> You can just use a function in terms of x, i.e.,
>
> const n_src = ...
> const xsrc = ...
> const σ_m = ...
> const σ_s = ...
>
> function fe(x)  # Source as exponential function
>   s = 0.0  # Float64 accumulator for type stability
>   for i in 1:n_src
>     s += ((-1)^(i+1))*σ_m*exp(-((x[1]-xsrc[i,1])^2+(x[2]-xsrc[i,2])^2)/(2*σ_s^2))
>   end
>   return s
> end
>
> And then
>
> function res_electrostatics(c,ϕ,ξ)
>    ∇(ξ)*σ(c,∇(ϕ)) + ξ*fe
> end

I modified the code in accordance with this comment.

dagarcial commented 4 years ago

> Replace
>
> n_src = 2
> xsrc = zeros((2,2))
> xsrc[1,:] = [0.4,1.0]
> xsrc[2,:] = [0.6,1.0]
>
> with
>
> const n_src = 2
> const xsrc = (0.4, 1.0, 0.6, 1.0)
>
> for performance, and put the whole driver in a function for JIT compilation.

Fixed. However, I coded this part assuming that in the future the number of sources (electrodes) will be large. I do not feel comfortable filling a 1D array by hand to set the coordinates of 100 electrodes or more.

In the future we should at least create a function that converts a list with coordinates as columns into a vector, if a vector is the optimal layout.

I do not understand what "put the whole driver in a function for JIT compilation" means.

santiagobadia commented 4 years ago

You can always create your array using a function with whatever you want and then create an immutable Tuple out of it:

a = # create here your 100x100 array
const b = Tuple(a)
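For the electrode case above, that pattern could look like the following sketch; the helper name `coords_to_tuple` is hypothetical, not part of Gridap.

```julia
# Hypothetical helper: flatten a matrix with one source coordinate per
# row into an immutable Tuple that can be bound to a `const` global.
coords_to_tuple(xsrc::AbstractMatrix) = Tuple(vec(permutedims(xsrc)))

xsrc = [0.4 1.0;
        0.6 1.0]                      # one electrode per row, any count
const XSRC = coords_to_tuple(xsrc)    # (0.4, 1.0, 0.6, 1.0)

# Accessor for electrode i, assuming 2D coordinates stored row by row:
coord(t, i) = (t[2i-1], t[2i])
```

This keeps the convenient matrix layout for setting up 100+ electrodes while the hot loop still reads from an immutable, concretely-typed constant.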

For the driver I mean

function my_solver(order,partition)
# and all your driver here
end 

order = [2,5,9,27] 
partition = 100
my_solver(order,partition)

This way, the JIT compiler will compile your driver and optimise.
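The underlying reason, as the performance tips explain, is that non-`const` globals are type-unstable: the compiler cannot assume their type, so code touching them cannot be fully optimised. A minimal sketch of the pattern (all names illustrative):

```julia
# Non-const globals defeat the JIT: their type can change at any time.
coeff = 2.0          # non-const global: avoid reading this in hot loops
const COEFF = 2.0    # const global: type is fixed, the JIT can inline it

# Best practice: put the work in a function and pass data as arguments,
# so everything inside is concretely typed and gets compiled once.
function scaled_sum(v, c)
    s = 0.0
    for x in v
        s += c*x
    end
    return s
end

v = collect(1.0:1000.0)
scaled_sum(v, COEFF)
```

The same logic applies to the whole driver: once it lives inside a function, every variable in it has a concrete type the compiler can exploit.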

I urge you to read

https://docs.julialang.org/en/v1/manual/performance-tips/index.html

dagarcial commented 4 years ago

I pushed a new version of the code. This version computes a steady-state version of the Henry problem (a density-driven flow problem).

Note that the formal Henry problem is time-dependent. However, since it reaches a steady state, this works as a check of the code before implementing the time discretization.

Solving this problem I found an issue. I haven't checked numerically, just graphically. While the concentration obtained with Gridap looks similar to the solutions in the papers, the velocity field is not the same. I am not sure of the reason, but I assume it is not a Gridap problem but a formulation problem.

This version fixed the problem I had with the dispersion coefficient. The problem was related to the term of the formula that has the outer product of u (the velocity) over the norm of u. I did not expect u⊗u/|u| to be a problem when u = 0, but in this case it is.
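For reference, the u⊗u/|u| indeterminacy can be reproduced and guarded in a few lines. Note that in floating point 0/0 silently yields NaN rather than raising an error, which is how the bad values slip through. A sketch of one possible guard, not the code actually used in the driver:

```julia
using LinearAlgebra

# The dispersion term u⊗u/|u| is indeterminate at u = 0: each entry
# becomes 0/0, which floating point evaluates to NaN without any error.
unsafe_disp(u) = (u*u')/norm(u)

# Guarded version: return the zero tensor when the velocity vanishes.
safe_disp(u) = iszero(norm(u)) ? zero(u*u') : (u*u')/norm(u)

u0 = [0.0, 0.0]
any(isnan, unsafe_disp(u0))   # NaNs propagate silently downstream
safe_disp(u0)                 # zero 2x2 matrix instead
```

An alternative guard is a small regularisation, u⊗u/(|u| + ε), which keeps the expression smooth at the cost of a modelling choice for ε.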

Additional issues:

  1. About the Robin boundary conditions: I observed that Gridap uses FESource to impose Neumann boundary conditions, but there is no explicit way to tell whether the integration is over the domain or over the boundary. The Robin BC requires a boundary integral on the LHS that includes the potential-field unknown. I cannot add that term directly to the residual, since Gridap will assume the integral is over the domain, and I cannot add it to the boundary source, since it needs the potential-field unknown. What is the correct way to add this kind of term to the code?
  2. About the time discretization: I tested the code with GridapODEs but I couldn't make it work. Do you know if GridapODEs is working for multiphysics?

santiagobadia commented 4 years ago

Hi @dagarcial

> Solving this problem I found an issue. I haven't checked numerically, just graphically. While the concentration obtained with Gridap looks similar to the solutions in the papers, the velocity field is not the same. I am not sure of the reason, but I assume it is not a Gridap problem but a formulation problem.

With this information, I can't help much. The methods you are using have been extensively tested by other developers for many different problems. So, I wouldn't expect it to be a Gridap issue, but you never know. It would be nice to identify better the problem.

> This version fixed the problem I had with the dispersion coefficient. The problem was related to the term of the formula that has the outer product of u (the velocity) over the norm of u. I did not expect u⊗u/|u| to be a problem when u = 0, but in this case it is.

0/0 is mathematically indeterminate, so once you have it in a code, in the very rare situation where it does not raise an exception (which is what I always expect), you will get rubbish.

> Additional issues:
>
> About the Robin boundary conditions: I observed that Gridap uses FESource to impose Neumann boundary conditions, but there is no explicit way to tell whether the integration is over the domain or over the boundary. The Robin BC requires a boundary integral on the LHS that includes the potential-field unknown. I cannot add that term directly to the residual, since Gridap will assume the integral is over the domain, and I cannot add it to the boundary source, since it needs the potential-field unknown. What is the correct way to add this kind of term to the code?

Look e.g. at this tutorial

https://gridap.github.io/Tutorials/dev/pages/t006_dg_discretization/#Tutorial-6:-Poisson-equation-(with-DG)-1

Note that you have a triangulation and a boundary triangulation, and a quadrature and a boundary quadrature; you then create FETerms with the bulk triangulation/quadrature for domain terms, or with the boundary triangulation/quadrature for boundary terms. All you need is in this tutorial.
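The key point is the split the Robin condition forces: the α·u part multiplies the unknown, so it must land in the system matrix (a boundary term on the LHS, i.e. a boundary FETerm in Gridap), while only the known datum g goes to the right-hand side. A 1D finite-difference toy, not Gridap code, to make the split concrete:

```julia
# Toy problem: -u'' = 0 on (0,1), u(0) = 0, Robin at x = 1:
#   u'(1) + α*u(1) = g
# Exact solution: u(x) = g/(1+α) * x.
function solve_robin_1d(n, α, g)
    h = 1/n
    A = zeros(n, n); b = zeros(n)    # unknowns u_1..u_n (u_0 = 0)
    for i in 1:n-1                   # interior rows: -u'' = 0
        A[i,i] = 2/h^2
        i > 1 && (A[i,i-1] = -1/h^2)
        A[i,i+1] = -1/h^2
    end
    # Robin row: (u_n - u_{n-1})/h + α*u_n = g.
    # The α*u_n piece goes into the MATRIX (it multiplies the unknown);
    # only the known datum g goes to the right-hand side.
    A[n,n]   = 1/h + α
    A[n,n-1] = -1/h
    b[n]     = g
    A \ b
end

α, g = 2.0, 3.0
uh = solve_robin_1d(50, α, g)
# With these values g/(1+α) = 1, so the exact boundary value is u(1) = 1.
```

In Gridap the same split means: the α·u·v boundary integral is built as a matrix term on the boundary triangulation/quadrature, and only the g·v integral is a source term.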

> About the time discretization: I tested the code with GridapODEs but I couldn't make it work. Do you know if GridapODEs is working for multiphysics?

Yes, @oriolcg has used it for a transient fluid-structure interaction problem and for incompressible Navier-Stokes.

https://github.com/gridap/Tutorials/blob/transient_nsi/src/transient_inc_navier_stokes.jl

But there is always the possibility of a bug. You could push the transient example and create an issue with the problem; I can take a look.