santiagobadia closed this issue 3 years ago
One issue that is important in terms of performance/correctness, @dagarcial: I think you don't need a `@law` here. You can just use a function in terms of `x`, i.e.,
```julia
const n_src = ...
const xsrc = ...
const σ_m = ...
const σ_s = ...

function fe(x) # Source as exponential function
  s = 0.0
  for i in 1:n_src
    s += ((-1)^(i+1))*σ_m*exp(-((x[1]-xsrc[i,1])^2+(x[2]-xsrc[i,2])^2)/(2*σ_s^2))
  end
  return s
end
```
And then:

```julia
function res_electrostatics(c,ϕ,ξ)
  ∇(ξ)*σ(c,∇(ϕ)) + ξ*fe
end
```
Replace

```julia
n_src = 2
xsrc = zeros((2,2))
xsrc[1,:] = [0.4,1.0]
xsrc[2,:] = [0.6,1.0]
```

with

```julia
const n_src = 2
const xsrc = (0.4, 1.0, 0.6, 1.0)
```

for performance, and put the whole driver in a function for JIT compilation.
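A runnable sketch of the flat, immutable layout, using the source coordinates from the snippet above (`σ_m` and `σ_s` are placeholder values here, since the thread does not give them):

```julia
const n_src = 2
const xsrc = (0.4, 1.0, 0.6, 1.0)   # flattened as (x1, y1, x2, y2)
const σ_m = 1.0                      # placeholder amplitude (not given in the thread)
const σ_s = 0.1                      # placeholder spread (not given in the thread)

# Source as a sum of alternating-sign Gaussians, indexing the flat tuple
function fe(x)
    s = 0.0
    for i in 1:n_src
        dx = x[1] - xsrc[2i - 1]
        dy = x[2] - xsrc[2i]
        s += ((-1)^(i + 1)) * σ_m * exp(-(dx^2 + dy^2) / (2 * σ_s^2))
    end
    return s
end
```

With `xsrc` an `NTuple` of `Float64`, the loop is fully type-stable, so a `const` global costs nothing here.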
I modified the code in accordance with this comment.
Fixed. However, I wrote this part considering that in the future the number of sources (electrodes) will be large. I do not feel comfortable filling a 1D array by hand to set the coordinates of 100 electrodes or more.
In the future, we should at least create a function that converts a list with coordinates as columns into a vector, if a vector is the optimal layout.
Also, I do not understand what "put the whole driver in a function for JIT compilation" means.
You can always create your array using a function with whatever you want and then create an immutable `Tuple` out of it:

```julia
a = # create here your 100x100 array
const b = Tuple(a)
```
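As an illustration of that pattern (the function and variable names here are mine), the coordinate table can be built row by row and then frozen into a tuple of row tuples:

```julia
# Build the coordinate table in whatever way is convenient…
function make_sources()
    a = zeros(2, 2)
    a[1, :] = [0.4, 1.0]
    a[2, :] = [0.6, 1.0]
    return a
end

# …then freeze it: an NTuple of row tuples is immutable, so a `const`
# binding of it is concretely typed and fast to read inside functions.
const xsrc_frozen = Tuple(Tuple(row) for row in eachrow(make_sources()))

xsrc_frozen[2]      # coordinates of the second source: (0.6, 1.0)
```

This keeps the convenient "one row per electrode" construction while still giving the solver an immutable constant.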
For the driver, I mean:

```julia
function my_solver(order,partition)
  # and all your driver here
end

order = [2,5,9,27]
partition = 100
my_solver(order,partition)
```
This way, the JIT compiler will compile your driver and optimise.
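The effect can be seen in a small standalone sketch (the data and function names here are mine, not from the driver): a loop that reads an untyped global cannot be specialized by the compiler, while the same loop receiving the data as an argument compiles to tight code.

```julia
data = rand(1000)   # untyped global: its binding could change type at any time

# Reads the global directly; the compiler must treat `data` as type-unstable
function sum_untyped_global()
    s = 0.0
    for v in data
        s += v
    end
    return s
end

# Same loop, but the array arrives as an argument with a concrete type
function sum_arg(a)
    s = 0.0
    for v in a
        s += v
    end
    return s
end

# @time sum_untyped_global()  # allocates on every iteration
# @time sum_arg(data)         # allocation-free after the first (JIT) call
```

Putting the whole driver inside a function gives every variable in it this second, concretely typed treatment.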
I urge you to read
https://docs.julialang.org/en/v1/manual/performance-tips/index.html
I pushed a new version of the code. This version computes a steady-state version of the Henry problem (a density-driven flow problem).
Note that the formal Henry problem is time-dependent; however, since it reaches a steady state, it is worth checking the code before implementing the time discretization.
Solving this problem I found an issue. I haven't checked numerically, just graphically. While the concentration obtained with Gridap looks similar to the solutions in the papers, the velocity field is not the same. I am not sure of the reason, but I assume it is not a Gridap problem but a formulation problem.
This version fixed the problem I had with the dispersion coefficient. The problem was related to the term of the formula that has the outer product of u (the velocity) divided by the norm of u. I expected u⊗u/|u| not to cause any problem when u = 0, but in this case it does.
Additional issues:
About the Robin boundary conditions: I observed that Gridap uses FESource to impose Neumann boundary conditions, but there is no explicit way to tell whether the integration is over the domain or over the boundary. The Robin BC requires a boundary integration on the LHS that includes the unknown potential field. I cannot add that term directly to the residual, since Gridap will understand the integral as being over the domain, and I cannot add it to the boundary source term, since I need to include the unknown potential field. Can you tell me the correct way to add this kind of term to the code?
Hi @dagarcial
> Solving this problem I found an issue. I haven't checked numerically, just graphically. While the concentration obtained with Gridap looks similar to the solutions in the papers, the velocity field is not the same. I am not sure of the reason, but I assume it is not a Gridap problem but a formulation problem.
With this information, I can't help much. The methods you are using have been extensively tested by other developers for many different problems, so I wouldn't expect it to be a Gridap issue, but you never know. It would be nice to pin down the problem better.
> This version fixed the problem I had with the dispersion coefficient. The problem was related to the term of the formula that has the outer product of u (the velocity) divided by the norm of u. I expected u⊗u/|u| not to cause any problem when u = 0, but in this case it does.
0/0 is mathematically indeterminate, so once you have it in code, in the rare situation where it does not raise an exception (which is what I always expect), you will get rubbish.
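For reference, Julia's floating-point 0/0 does not raise an exception at all; it silently yields NaN, which then propagates through every downstream operation and poisons the assembled system. A minimal sketch of a guard for the u⊗u/|u| term (the helper name and the tolerance are choices of this sketch, not from the thread):

```julia
# IEEE floating point: 0/0 yields NaN silently, with no exception…
@assert isnan(0.0 / 0.0)
# …and NaN propagates through everything computed from it:
@assert isnan(NaN + 1.0)

# The dispersion term needs (u ⊗ u)/|u|; at u = 0 every entry is 0/0.
# The physical limit there is the zero tensor, so branch explicitly.
function outer_over_norm(u; tol = 1e-12)
    nu = sqrt(sum(abs2, u))
    nu < tol && return zeros(length(u), length(u))  # limit as u → 0
    return [ui * uj / nu for ui in u, uj in u]
end
```

Inside a `@law` or residual, the same branch on `|u|` avoids the NaN entirely instead of relying on an exception that never comes.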
> About the Robin boundary conditions: I observed that Gridap uses FESource to impose Neumann boundary conditions, but there is no explicit way to tell whether the integration is over the domain or over the boundary. The Robin BC requires a boundary integration on the LHS that includes the unknown potential field. Can you tell me the correct way to add this kind of term to the code?
Look e.g. at this tutorial.
Note that you have a triangulation and a boundary triangulation, a quadrature and a boundary quadrature, and then you create FETerms with the bulk triangulation/quadrature for domain terms, or with the boundary triangulation/quadrature for boundary terms. All you need is in this tutorial.
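Independently of the Gridap API, the structure of the Robin term can be checked on a tiny 1D model: for -u'' = 0 on (0,1) with u(0) = 0 and the Robin condition u'(1) + α·u(1) = g, the weak form picks up α·u(1)·v(1) on the LHS (it multiplies the unknown) and g·v(1) on the RHS. A plain-Julia sketch with P1 elements (all names here are mine):

```julia
using LinearAlgebra

# 1D model: -u'' = 0 on (0,1), u(0) = 0 (Dirichlet, eliminated),
# Robin at x = 1:  u'(1) + α*u(1) = g.
# Weak form: ∫ u'v' dx + α*u(1)*v(1) = g*v(1).
function solve_robin_1d(n, α, g)
    h = 1.0 / n
    K = zeros(n, n)          # dofs at x = h, 2h, …, 1
    for e in 1:n             # assemble P1 stiffness (1/h)*[1 -1; -1 1]
        K[e, e] += 1 / h
        if e >= 2
            K[e-1, e-1] += 1 / h
            K[e-1, e] -= 1 / h
            K[e, e-1] -= 1 / h
        end
    end
    b = zeros(n)
    K[n, n] += α             # Robin contribution to the LHS matrix
    b[n] += g                # Robin contribution to the RHS vector
    return K \ b             # nodal values at h, 2h, …, 1
end
```

The exact solution is u(x) = g·x/(1+α), which linear elements reproduce exactly; the key point is that the boundary integral with the unknown modifies the matrix, while the `g` part goes to the source, mirroring the bulk-vs-boundary FETerm split in the tutorial.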
About the time discretization: I tested the code with GridapODEs, but I couldn't make it work. Do you know if GridapODEs works for multiphysics?
Yes, @oriolcg has used it for a transient fluid-structure interaction problem and for incompressible Navier-Stokes.
https://github.com/gridap/Tutorials/blob/transient_nsi/src/transient_inc_navier_stokes.jl
But there is always the possibility of a bug. You could push the transient example and create an issue describing the problem; I can take a look.
After reading some comments in the code, it seems we have some issues to discuss:

- `@law` for dispersion is not working. We should at least try to identify what the problem is.

On the other hand, there are some additional developments:

- GridapODEs.

With regard to profiling:

- Gridap vs FEMPAR. (Take into account that the first run involves JIT; consider one warm-up solve first.)
- Julia `\`, not sure how efficient it is going to be.

What else?