termi-official opened 1 year ago
I don't think this is correct, and currently, the code is special-cased for vector fields.
We should probably adopt the method from Ferrite: https://github.com/Ferrite-FEM/Ferrite.jl/blob/fe44e7f6297158e9bbb62fd4bc9adaa1d79929f2/src/FEValues/common_values.jl#L107-L110
I.e., something like the following
divergence(f::F, v::Vec) where {F} = divergence_from_gradient(gradient(f, v))
# divergence_from_gradient(g::Vec) = sum(g) # Not sure why this one is in Ferrite -- it should never be encountered, since it implies calling divergence on a scalar field, which is not well-defined, I think?
divergence_from_gradient(g::SecondOrderTensor) = tr(g)
divergence_from_gradient(g::Tensor{3,dim,T}) where {dim,T} = g ⊡ one(Tensor{2,dim,T})  # double contraction of the last two indices with the identity
Edit: I noticed that there seem to be different definitions in the literature for the divergence of (nonsymmetric) 2nd-order tensor fields. I'm following Bonet and Wood (2008), eq. 2.134, which defines the divergence as $$\mathrm{grad}(\boldsymbol{S}) : \boldsymbol{I} = \frac{\partial S_{ij}}{\partial x_j} \boldsymbol{e}_i$$
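For reference, if the gradient is stored with the derivative index appended last, i.e. $g_{ijk} = \partial S_{ij} / \partial x_k$ (an assumption on the storage convention), the proposed double contraction with the identity reproduces exactly this component formula: $$(\boldsymbol{g} : \boldsymbol{I})_i = g_{ijk}\,\delta_{jk} = \frac{\partial S_{ij}}{\partial x_j}$$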
Btw., I also noticed that `(t::Tensor{3})[:, 1, 1]` returns a plain `Vector`, which is not ideal, so this should also be fixed at some point!
Good point, I guess the divergence of a tensor is in general not the trace of the gradient. :)
I think that your definition holds only for symmetric tensors $S$. Let us assume Cartesian coordinates; then $$\mathrm{div}(S) = \nabla \cdot S = \partial_i S_{ik}\, e_k$$ where the first equality just uses the assumption of Cartesian coordinates and the second uses the definition of nabla and standard tensor algebra. Also in Cartesian coordinates we have $$\mathrm{tr}(\mathrm{grad}(S)) = \mathrm{tr}(\partial_k S_{ij}\, e_i \otimes e_j \otimes e_k) = \partial_i S_{ij}\, e_j$$ where the trace is taken as defined above. Am I missing something?
For the definition: yes, these are the different definitions I was mentioning, either $$\mathrm{div}(\boldsymbol{S}) = \nabla \cdot \boldsymbol{S}$$ or $$\mathrm{div}(\boldsymbol{S}) = \boldsymbol{S} \cdot \nabla$$
where the latter is used in, e.g., Bonet and Wood, but I've seen the one you refer to in other sources as well (such as the Wikipedia page on the divergence of 2nd-order tensors).
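Spelled out in Cartesian components (just to make the difference explicit): $$(\nabla \cdot \boldsymbol{S})_j = \frac{\partial S_{ij}}{\partial x_i}, \qquad (\boldsymbol{S} \cdot \nabla)_i = \frac{\partial S_{ij}}{\partial x_j},$$ so $\nabla \cdot \boldsymbol{S} = \boldsymbol{S}^{\mathrm{T}} \cdot \nabla$, and the two definitions agree whenever $\boldsymbol{S}$ is symmetric.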
For the second point: the difference between your code and the tensor expression is that your code sums all entries; the "correct" code would be
Vec{dim}(i->sum(t[k,i,l]*(k==l) for k in 1:dim, l in 1:dim))
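In index notation (assuming the gradient stores $t_{ijk} = \partial S_{ij} / \partial x_k$), this evaluates to $$\sum_{k,l} t_{kil}\,\delta_{kl} = \sum_k \frac{\partial S_{ki}}{\partial x_k} = (\nabla \cdot \boldsymbol{S})_i,$$ i.e. it contracts the derivative index with the first index of $\boldsymbol{S}$, whereas the `g ⊡ one(...)` proposal above contracts it with the second.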
(And speaking of definitions, we haven't even started on the curl of 2nd-order tensors yet; I think I've seen 4 different definitions there, see my course notes here :))
xref #210 for further discussions about divergence and curl definitions in general
MWE
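A minimal sketch of the kind of call in question (the field `S` below is purely illustrative):

```julia
using Tensors

# Illustrative second-order tensor field, S_ij = x_i x_j
S(x::Vec{3}) = x ⊗ x

x = Vec{3}((1.0, 2.0, 3.0))
divergence(S, x)  # the behaviour of divergence for tensor fields is what this issue is about
```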
Edit: Fix should be as easy as
but I have no good idea how to test this.
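One possibility (just a sketch, assuming the $\mathrm{grad}(\boldsymbol{S}) : \boldsymbol{I}$ convention from the proposal above) would be to compare against a field whose divergence is known in closed form:

```julia
using Tensors, Test

# Sketch of a possible test: for S(x) = x ⊗ v we have ∂S_ij/∂x_j = v_i,
# so under the grad(S) : I (Bonet & Wood) convention the divergence is simply v.
v = Vec{3}((1.0, 2.0, 3.0))
S(x::Vec{3}) = x ⊗ v

@test divergence(S, rand(Vec{3})) ≈ v
# under the ∇ ⋅ S convention the expected value would instead be 3v (in 3D)
```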