mikeingold opened this issue 1 month ago
I just merged #68, beginning an effort to restructure the unit tests.
Code coverage is "officially" up to 100% as of the v0.14.0 release. However, this only verifies that all lines of code within this package are exercised during the tests, not that all possible combinations of integral argument types produce correct results.
Work is still ongoing to convert old-style generated `@testset`s to new-style (`combinations.jl`) `@testitem`s.
With #100 merged, the previous automatic test generation system (`auto-tests.jl`) has been completely removed in favor of a collection of TestItems.jl `@testitem`s (`combinations.jl`).
Remaining work on this Issue:
- The `@testitem`s still use a unit integrand (i.e. `f(p) = 1.0`) and fairly simple geometry constructions, with benchmark integral values provided by `Meshes.measure`. These should be updated with more interesting functions and geometries with known analytic solutions; see the sketch below.
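For illustration, a test against a known analytic solution might look something like the following sketch. The choice of geometry, integrand, and rules is hypothetical (not taken from `combinations.jl`), and it assumes point coordinates carry Unitful meters:

```julia
using TestItems

@testitem "Sphere (2D) with a non-unit integrand" begin
    using Meshes, MeshIntegrals
    using LinearAlgebra: norm
    using Unitful

    # Analytic benchmark: every point on a circle of radius r centered at the
    # origin lies a distance r from the origin, so ∫ ‖p‖ dl = r ⋅ (2πr) = 2πr².
    r = 2.0
    circle = Sphere(Point(0, 0), r)   # a Sphere with a 2D center is a circle

    f(p) = norm(to(p))                # distance from the origin (a Unitful length)
    analytic = 2π * r^2 * u"m^2"      # assumes Meshes' default length unit (meters)

    for rule in (GaussLegendre(100), GaussKronrod(), HAdaptiveCubature())
        @test integral(f, circle, rule) ≈ analytic
    end
end
```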
The current unit testing structure was aimed mainly at testing all possible combinations `{f, G, IA}`, where `f` is the integral function name/alias, `G` is a `Meshes.Geometry` sub-type, and `IA` is an integration algorithm. In hindsight, the system I wrote for batching these tests was perhaps a little bit too clever for its own good. It also relies on `Meshes.measure` to produce correct answers for a unit-valued integrand.

As of 7-Sep-2024, we're currently at about 77% code coverage per the Codecov report. Most of the gaps seem to be related to wrapper/convenience methods, i.e. ones that simply batch out to the core integration methods.
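For context, the combination batching described above amounts to something like the sketch below; the specific function, geometry, and rule lists are illustrative placeholders, not the actual contents of the removed `auto-tests.jl`:

```julia
using Meshes, MeshIntegrals
using Test

# Illustrative stand-ins for the {f, G, IA} lists.
fns = (integral, lineintegral)                    # integral function and a 1D alias
geometries = (Segment(Point(0, 0), Point(1, 1)),  # paramdim-1 geometries only,
              Sphere(Point(0, 0), 1.0))           #   so lineintegral applies
rules = (GaussLegendre(100), GaussKronrod(), HAdaptiveCubature())

f(p) = 1.0   # unit integrand: the integral should equal the geometry's measure

@testset "$fn / $(nameof(typeof(g))) / $(nameof(typeof(r)))" for fn in fns, g in geometries, r in rules
    @test fn(f, g, r) ≈ Meshes.measure(g)
end
```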
Ideas for a more robust test system:
**Status of Unit Tests with Analytic Solutions**
- `Ball` (2D)
- `Ball` (3D)
- `BezierCurve`
- `Box` (1D)
- `Box` (2D)
- `Box` (3D)
- `Box` (4D)
- `Circle`
- `Disk`
- `Line`
- `ParameterizedCurve`
- `Plane`
- `Quadrangle`
- `Ray`
- `Ring`
- `Rope`
- `Segment`
- `Sphere` (2D)
- `Sphere` (3D)
- `Cone`
- `ConeSurface`
- `Cylinder`
- `CylinderSurface`
- `Ellipsoid`
- `FrustumSurface`
- `Hexahedron`
- `ParaboloidSurface`
- `Tetrahedron`
- `Torus`
- `Triangle`