gridap / Gridap.jl

Grid-based approximation of partial differential equations in Julia

Gradient of a constant times vector not working #388

Closed oriolcg closed 4 years ago

oriolcg commented 4 years ago

Hi @fverdugo @santiagobadia ,

In the latest version of Gridap (@0.14) I get an error when taking the gradient of a constant times a vector; see the reproducer below. This operation is needed when working with averaged quantities, e.g. ϵ(uθ) = ϵ(θ*u + (1-θ)*uₙ).

module Reproducer
using Gridap

domain = (-1,1,-1,1)
partition = (2,2)
model = CartesianDiscreteModel(domain,partition)
trian = Triangulation(model)
quad = CellQuadrature(trian,2)

V = TestFESpace(
  model=model,
  valuetype=VectorValue{2,Float64},
  reffe=:Lagrangian,
  order=2,
  conformity =:H1,
)
U = TrialFESpace(V)

uh1 = interpolate(VectorValue(0.0,0.0),U)

F = integrate(∇(0.5*uh1),trian,quad) # this line triggers the error
end

The problem is caused by the fact that the * operation between the constant and the vector converts the constant into a CellField (with constant values). When taking the gradient, the call then does not dispatch to the apply_gradient function specialized for the case in which one of the arguments is of type ::Number, where the gradient of the product omits the term involving the gradient of the Number (https://github.com/gridap/Gridap.jl/blob/4480be9db683cd4df789d30e4b2026137ecd24f1/src/Fields/FieldOperations.jl#L207). Instead, it goes to the generic function: https://github.com/gridap/Gridap.jl/blob/4480be9db683cd4df789d30e4b2026137ecd24f1/src/Fields/FieldOperations.jl#L217

Therefore, we now have the following operation happening in the code: ∇(a*uh) = ∇(a)*uh + a*∇(uh). This results in a + operation between a VectorValue and a TensorValue which, of course, is not supported.
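To make the type mismatch concrete, here is a minimal sketch at the TensorValues level (the particular values are illustrative):

    using Gridap.TensorValues

    v = VectorValue(1.0, 2.0)            # shape of ∇(a)*uh under the current code path
    t = TensorValue(1.0, 0.0, 0.0, 1.0)  # shape of a*∇(uh)
    v + t  # MethodError: no method matching +(::VectorValue, ::TensorValue)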

This issue has three possible fixes:

1) Everything would work if the * operator preserved the Number type, but I don't know if this is possible/desirable.

2) We could specialize the apply_gradient function that we have for Number, but in this case for FillArrays.Fill{Number,...} (I don't really know the exact type that we should put there).

3) We could make it work in the general case by making the result of the first term (∇(a)*uh) a tensor, which is what it should be: ∇(a)*uh = ∂a/∂xᵢ * uhⱼ. For this we would need ∇(a) to return a VectorValue (now it returns an array of zeros) and, instead of the product *, use the outer product ⊗; see the sketch below.
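A sketch of what option 3) would mean at the TensorValues level; the values are illustrative, and outer is Gridap's exported outer-product function:

    using Gridap.TensorValues

    ∇a  = VectorValue(0.0, 0.0)           # gradient of a constant scalar: the zero vector
    uh  = VectorValue(1.0, 2.0)
    ∇uh = TensorValue(1.0, 0.0, 0.0, 1.0)

    # outer(∇a, uh) is a TensorValue, so it can be added consistently to a*∇(uh)
    outer(∇a, uh) + 0.5*∇uh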

Option 1) is what we had until version 0.13. Do you have any insights on this? Am I missing any important point?

fverdugo commented 4 years ago

I don't understand why this worked in version 0.13 but not in 0.14. This part seems the same (perhaps I am missing something). In both cases the Number object gets inserted into a Fill, right?

fverdugo commented 4 years ago

Now I see: in Gridap 0.13 this method https://github.com/gridap/Gridap.jl/blob/1a6fd727805325b62bfe41899e8a101e69ad1d3e/src/Fields/FieldOperations.jl#L290 was not called (in contrast to what I was expecting). The method that is being called is the default one: https://github.com/gridap/Gridap.jl/blob/1a6fd727805325b62bfe41899e8a101e69ad1d3e/src/Fields/FieldArrays.jl#L144

It iterates over cells and computes the gradient at each cell. To do so, this method is eventually called: https://github.com/gridap/Gridap.jl/blob/1a6fd727805325b62bfe41899e8a101e69ad1d3e/src/Fields/FieldOperations.jl#L295

In Gridap 0.14 this is not working since we are not calling the default implementation; we use this one, as you said: https://github.com/gridap/Gridap.jl/blob/4480be9db683cd4df789d30e4b2026137ecd24f1/src/Fields/FieldOperations.jl#L217

fverdugo commented 4 years ago

In conclusion, we need to replace this (which is never called)

    function apply_gradient(k::Valued{FieldOpKernel{typeof($op)}},a::Number,b)
      gb = field_array_gradient(b)
      apply(k,a,gb)
    end 

with this:

    function apply_gradient(k::Valued{FieldOpKernel{typeof($op)}},a::AbstractArray{<:Number},b)
      gb = field_array_gradient(b)
      apply(k,a,gb)
    end 

Idem for a,b::Number
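For reference, the symmetric method (the Number array in the second argument) would presumably follow the same pattern; this is a sketch, not the exact source:

    function apply_gradient(k::Valued{FieldOpKernel{typeof($op)}},b,a::AbstractArray{<:Number})
      gb = field_array_gradient(b)
      apply(k,gb,a)
    end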

@oriolcg can you do this, and add a test that checks that it is indeed working? Thanks!

oriolcg commented 4 years ago

Yes, I'll create a PR for this. Thanks for the help!

oriolcg commented 4 years ago

Now it tells me that it's ambiguous:

    ERROR: MethodError: apply_gradient(::Gridap.Fields.Valued{Gridap.Fields.FieldOpKernel{typeof(*)}}, ::Array{Float64,1}, ::Array{Gridap.TensorValues.VectorValue{2,Float64},1}) is ambiguous. Candidates:
      apply_gradient(k::Gridap.Fields.Valued{Gridap.Fields.FieldOpKernel{typeof(*)}}, a::AbstractArray{#s8,N} where N where #s8<:Number, b) in Gridap.Fields at c:\Users\ocolomesgene\Progs\git-repos\gridap\Gridap.jl\src\Fields\FieldOperations.jl:208
      apply_gradient(k::Gridap.Fields.Valued{Gridap.Fields.FieldOpKernel{typeof(*)}}, b, a::AbstractArray{#s8,N} where N where #s8<:Number) in Gridap.Fields at c:\Users\ocolomesgene\Progs\git-repos\gridap\Gridap.jl\src\Fields\FieldOperations.jl:213
    Possible fix, define
      apply_gradient(::Gridap.Fields.Valued{Gridap.Fields.FieldOpKernel{typeof(*)}}, ::AbstractArray{#s8,N} where N where #s8<:Number, ::AbstractArray{#s8,N} where N where #s8<:Number)

I guess it's because both arguments are arrays, but is Gridap.TensorValues.VectorValue{2,Float64} <: Number?
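A quick REPL check suggests yes, since MultiValue (and hence VectorValue) subtypes Number in Gridap's TensorValues, which is what makes both arrays match AbstractArray{<:Number}:

    using Gridap

    Gridap.TensorValues.VectorValue{2,Float64} <: Number  # true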

fverdugo commented 4 years ago

You can implement the extra method. But this is a bit strange; are you calling something like field_array_gradient(fill(1.0,l)*fill(3.0,l))?

oriolcg commented 4 years ago

No, one fill has a constant and the other a VectorValue; something like this:

    np = 4  # number of evaluation points (added so the snippet runs; any positive integer)
    c = 0.5
    vec = VectorValue(1.0,1.0)
    cx = fill(c,np)
    vecx = fill(vec,np)
    acvecx = apply_to_field_array(FieldOpKernel(*),cx,vecx)
    gacvecx = field_array_gradient(acvecx)

fverdugo commented 4 years ago

Yes, sure. If you want to call this, then you need to implement the extra method.

oriolcg commented 4 years ago

OK, I see now. I should use a Field. I'm creating the PR now.