Closed · matt-graham closed 1 year ago
- If I am not mistaken, then `norm_order == 1` in fact computes the L^2-norm `ufl.inner(interpolated_error, interpolated_error)` of the error rather than the L^1-norm as promised. The case `norm_order == 2` then computes the L^1-norm.
Yep, the definitions here are definitely transposed from what they should be. Thanks for catching this, will update.
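For reference, a minimal sketch of the mapping `norm_order` should have (this is not the dxh implementation; the function name and the 1-D midpoint-rule quadrature are illustrative assumptions):

```python
import numpy as np

def lp_norm(values, cell_width, norm_order):
    """Approximate L^p norm of a function sampled at cell midpoints.

    Illustrative sketch only: `values` are samples of the (error) function
    on a uniform 1-D mesh with cell size `cell_width`.
    """
    if norm_order == 1:
        # L^1 norm: integral of |f|
        return np.sum(np.abs(values)) * cell_width
    elif norm_order == 2:
        # L^2 norm: square root of the integral of f^2
        return np.sqrt(np.sum(values**2) * cell_width)
    elif norm_order == "inf":
        # Discrete maximum over the sample points
        return np.max(np.abs(values))
    raise ValueError(f"Unsupported norm_order: {norm_order}")

# f(x) = x on [0, 1]: exact L^1 norm is 1/2, exact L^2 norm is 1/sqrt(3)
n = 1000
midpoints = (np.arange(n) + 0.5) / n
print(lp_norm(midpoints, 1.0 / n, 1))  # ~0.5
print(lp_norm(midpoints, 1.0 / n, 2))  # ~0.5774
```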
- I believe that the case `norm_order == "inf"` will only give the expected result for finite element spaces in which the degrees of freedom are given by point evaluations. Maybe it would be useful to indicate this in the docstring (ideally one would throw an error if the FE space does not fulfil said property).
Both documenting and throwing an error seem sensible here. As we interpolate the functions into another function space anyway we could also enforce this there.
A minor remark about `test_dxh.py`:

- The approximation error in the L2-norm should scale as `1.0 / number_cells_per_axis**(degree+1)`.
Thanks, will update this for the L2 case. You don't happen to know any similar results for the L1 / L-infinity norms?
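As a sketch of how the test could verify the expected scaling (the function name here is an assumption, not dxh code): estimate the observed convergence order from errors at successive resolutions; if the error scales as `C / number_cells_per_axis**(degree+1)`, the estimate should approach `degree + 1`.

```python
import numpy as np

def observed_order(cells_per_axis, errors):
    """Estimate convergence order from errors at a sequence of resolutions."""
    h = 1.0 / np.asarray(cells_per_axis, dtype=float)
    e = np.asarray(errors, dtype=float)
    # Slope of log(error) against log(h) between successive resolutions
    return np.log(e[1:] / e[:-1]) / np.log(h[1:] / h[:-1])

# Synthetic errors for degree = 2, i.e. third-order convergence
cells = [8, 16, 32, 64]
errors = [0.5 / n**3 for n in cells]
print(observed_order(cells, errors))  # -> [3. 3. 3.]
```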
> As we interpolate the functions into another function space anyway we could also enforce this there.
- Yes, sounds reasonable. So one could check if `family = function_1.function_space.ufl_element().family()` is a nodal element, e.g. 'CG', ... . If this isn't the case, we replace `family` with a 'CG' element.
- Provided that the function to be interpolated is sufficiently smooth, the approximation results should hold true for general $L^p$ with $p \in [1,\infty]$.
@janoschpreuss I've now fixed the transposition of the `norm_order=1` and `norm_order=2` implementations and updated the test to check that the error is of order `1.0 / number_cells_per_axis**(degree+1)`.
With regards to adding a check on the finite element family for the $L^\infty$ implementation: am I correct in thinking that the families for which the current implementation is valid would be any of the scalar-valued elements listed at https://defelement.com/elements/index.html for which the DOFs entry lists only point evaluations (of value)? So, for example:
Valid:
Invalid:
I can't see any nice way to check this programmatically in the Basix / DOLFINx interfaces, but we could possibly check if the family is one of a few known common valid cases, and otherwise change the function space interpolated into to the Lagrange family and issue a warning.
@matt-graham: The updates look good :+1:
Concerning $L^\infty$:

> am I correct in thinking the families for which the current implementation is valid would be any of the scalar-valued elements listed in https://defelement.com/elements/index.html for which the DOFs entry lists only point evaluations (of value) so for example
Yes, this is true. A remark concerning the invalid cases: one of the reasons for avoiding point evaluations in the definition of the dofs is to be able to interpolate functions with weaker regularity, e.g. functions which are only in $L^2$ and for which point evaluations are not possible. Consequently, it wouldn't make sense to change to the Lagrange family for the function space interpolated into in this case. However, measuring the discrete $L^\infty$-error on the dofs, i.e. `error_local = np.max(abs(interpolated_error.x.array))`, continues to make sense even for rough functions. I would therefore suggest leaving the function as is, but changing our interpretation, i.e. the label `"inf"` in `norm_order: Literal[1, 2, "inf"]`, to something like `"inf-dof"` to make clear that we are measuring the discrete $L^\infty$-error on the dofs here, which for the valid elements gives the $L^\infty$ error.
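The dof-based measurement above can be sketched with plain NumPy (the array here is a stand-in for `interpolated_error.x.array`; in a parallel run one would additionally reduce the local maxima over MPI ranks):

```python
import numpy as np

# Stand-in for interpolated_error.x.array: the local dof vector of the error
dof_values = np.array([0.1, -0.5, 0.3, 0.05])

# Discrete L^infinity error over the dofs, as in the snippet quoted above
error_local = np.max(np.abs(dof_values))
print(error_local)  # -> 0.5
```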
The documentation workflow failure seems to be happening specifically for the merge of the changes on this branch with current `main`, as `tox -e docs` runs without a problem locally on this branch, but I get the same

```
Warning, treated as error:
autodoc: failed to import module 'dxh'; the following exception was raised:
generic_type: cannot initialize type "Reduction": an object with that name is already defined
```

error as in the Actions workflow when running on a branch which merges this branch onto current `main`. Currently investigating 😕.
I still don't entirely understand what is causing the problem, but it seems to be some bad interaction between `pybind11` / `fenicsx` and Sphinx v7. The `generic_type: cannot initialize type "Reduction": an object with that name is already defined` error seems to be getting raised by `pybind11` in these lines, and appears to be due to there already being a `Reduction` type defined somewhere else, but I am not sure why we only get this error when importing the `dxh` module within `sphinx-build` and with Sphinx v7 (installing Sphinx v7 in a Conda environment alongside `dxh` and then importing does not raise the error). Pinning the Sphinx version in the `tox` dependencies for the `docs` environment seems to fix the issue though.
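One way such a pin might look in `tox.ini` (a sketch under assumptions: the thread only says the Sphinx version was pinned, so the exact bound and environment layout here are illustrative):

```ini
# tox.ini (assumed layout): pin Sphinx below v7 in the docs environment's
# dependencies to avoid the pybind11 "Reduction" import error.
[testenv:docs]
deps =
    sphinx<7
```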
Fixes #2

Adds a function for computing L1, L2 and L-infinity error norms, based on the examples at https://jsdokken.com/dolfinx-tutorial/chapter4/convergence.html#reliable-error-norm-computation