Closed pabloferz closed 6 years ago
Merging #5 into master will not change coverage. The diff coverage is 100%.

```
@@           Coverage Diff           @@
##           master      #5   +/-  ##
======================================
  Coverage     100%    100%
======================================
  Files           3       3
  Lines          85      90    +5
======================================
+ Hits           85      90    +5
```

Impacted Files | Coverage Δ |
---|---|
src/genz-malik.jl | 100% <100%> (ø) :arrow_up: |
Powered by Codecov. Last update 43e5f12...d3b4288.
Maybe add a test? 😃
The thing that bugs me about this is the precision of the test. I feel like it should somehow be tied to an error estimate computed from `f`, but I'm not sure how to do this in a reasonable way.
Right now I'm not sure what would be an appropriate way of doing that. Does `hcubature(f, (0.0,0.0,-0.2), (0.2,2π,0.2), rtol=1e-6)[1] ≈ (22502//140625)*π rtol=1e-6` seem better than what I currently have?
It certainly seems to help this particular integral, but any fix that depends on exact equality seems very fragile.
One thing we could do would be to use the current error estimate `E` in the integral as a measure of the uncertainty in `f`, i.e. `δf = E/volume`, and then do all comparisons with that absolute tolerance.
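As a rough illustration of that idea (a sketch only; `E`, `volume`, and the helper names here are placeholders, not HCubature's actual internals):

```julia
# Sketch: derive an absolute tolerance δf from the current error estimate E
# of the integral over a box, and use it for "fuzzy" comparisons of values
# derived from f. Names (E, volume) are illustrative placeholders.

uncertainty(E, volume) = E / volume

# Two quantities derived from f are indistinguishable up to the
# uncertainty in f itself:
samewithin(a, b, δf) = abs(a - b) ≤ δf

δf = uncertainty(1e-8, 0.5)      # δf = 2.0e-8
samewithin(0.3, 0.3 + 1e-9, δf)  # → true: treated as equal
samewithin(0.3, 0.4, δf)         # → false: genuinely different
```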
> It certainly seems to help this particular integral, but any fix that depends on exact equality seems very fragile.
Agreed.
> One thing we could do would be to use the current error estimate E in the integral as a measure of the uncertainty in f, i.e. δf = E/volume, and then do all comparisons with that absolute tolerance.
Do you mean that we could choose by width along an axis when `maxdiff < δf && divdiff < δf`? That is somewhat along the lines of what I had proposed above (setting `divdiff = divdiff < eps(T) ? zero(divdiff) : divdiff`, similarly to here), except that the tolerance is independent of the box.
I pushed some changes that should make this more robust. The scheme would be the following:

- If `divdiff` is greater than the maximum fourth difference found so far, `maxdivdiff`, by at least `ε`, set the axis to the current dimension.
- If `abs(divdiff - maxdivdiff) < ε`, then subdivide along the dimension with the larger width.

I have set `ε` to the machine epsilon and not to a value depending on the error of the integral over the current box, as we would have to store the fourth differences to choose the axis at the end. I believe this should be good enough, but if there's a stronger argument for setting `ε` to another value, this could be changed.
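A minimal sketch of that scheme, with illustrative names rather than the PR's actual code:

```julia
# Sketch of the described scheme (illustrative names, not the real PR code):
# scan the fourth difference along each dimension; a new axis wins only if
# its fourth difference exceeds the running maximum by more than ε, and
# near-ties are broken in favor of the wider dimension.

function pick_subdivision_axis(divdiffs, widths, ε = eps(eltype(divdiffs)))
    kdiv = 1
    maxdivdiff = divdiffs[1]
    for i in 2:length(divdiffs)
        if divdiffs[i] - maxdivdiff > ε
            maxdivdiff = divdiffs[i]   # clearly larger: take this axis
            kdiv = i
        elseif abs(divdiffs[i] - maxdivdiff) ≤ ε && widths[i] > widths[kdiv]
            kdiv = i                   # effective tie: prefer the larger width
        end
    end
    return kdiv
end

# Axes 1 and 2 have equal fourth differences, so the wider axis (2) wins:
pick_subdivision_axis([1.0, 1.0, 0.2], [0.1, 0.5, 0.9])  # → 2
```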
I was thinking of essentially what you have in your current PR, except with `ε` replaced by `δf`; I agree that this complicates the implementation, however.
(A tolerance independent of what the integrand is doing doesn't make sense to me. `eps(typeof(maxdivdiff))` is an especially poor choice because it is scaled incorrectly; equivalently, it is dimensionless while `δ` is dimensionful, so your units are wrong.)
Note that my suggested `δf` has the nice property (for sufficiently smooth integrands) that it goes to zero asymptotically faster than `divdiff` as the box size shrinks, so for a sufficiently small box size it is equivalent to an equality test.
Forgot about dimensions. As I commented above I set the tolerance to a fixed value to avoid storing the fourth differences and choosing the axis at the end, but that is probably not of significant cost compared with the rest of the routine.
You could use `eps(norm(f₁))` to have the correct units. However, as the integrand becomes more oscillatory, the divided differences become less accurate (basically, they are an estimate for the second derivative along each direction), which is why I suggested a `δf` that depends on how oscillatory the function is (as measured by the error estimate).
I agree that this makes the implementation more complicated, but I think it is worth it.
`δf = E / V` seems to require more function evaluations (see the failing tests) for many integrands. I tested locally using either `eps(norm(I) / V)` or `eps(E / V)`, and neither of those seems to cause any problems.
We can always use `δf = 0.1 * E / V` or any similar scale factor; the scaling here seems pretty heuristic. `eps(E / V)` seems unreasonably small, though.
(The failed test seems to be only a 10% increase in function evaluations, though, which isn't too bad. Slightly decreasing `δf` as suggested above might erase even this difference.)
I did some experimenting and found that `δf = 0.001 * E / V` gives the same function evaluations in the test example while solving the problem with the example from #4.
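For reference, that heuristic tolerance can be written as follows (a sketch; `E` and `V` stand for the box's error estimate and volume, and the `0.001` factor is the empirical scale found above):

```julia
# Heuristic tolerance from the experiments above: a small fraction of the
# error-estimate density over the box. E and V are placeholders for the
# box's error estimate and volume.
tolerance(E, V) = 0.001 * E / V

# Unlike eps(E / V), this scales with how oscillatory f is on the box:
tolerance(1e-6, 0.125)   # → 8.0e-9
tolerance(1e-12, 0.125)  # → 8.0e-15 (shrinks with the error estimate)
```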
Sorry for the long delay; this looks ready to merge.
Fixes https://github.com/stevengj/HCubature.jl/issues/4
When the fourth difference is around the same value along every axis, we should choose the axis that corresponds to the largest width, to ensure we sample the whole box better.
C.c. @stevengj