Kolaru closed this pull request 6 years ago.
Merging #51 into master will increase coverage by 1.01%. The diff coverage is 97.43%.

```diff
@@            Coverage Diff             @@
##           master      #51      +/-   ##
==========================================
+ Coverage   79.51%   80.53%   +1.01%
==========================================
  Files           8        8
  Lines         332      339       +7
==========================================
+ Hits          264      273       +9
+ Misses         68       66       -2
```

| Impacted Files | Coverage Δ | |
|---|---|---|
| src/complex.jl | 75% <100%> (+15%) | :arrow_up: |
| src/branch_and_prune.jl | 86.56% <97.05%> (+1.95%) | :arrow_up: |

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update e235900...53b7683.
Many thanks for the contribution!
Looking at the `branch_and_prune` code, it could really use a refactor to make the dimension a type parameter.

I think it is as it currently is because I was hoping that the generic `branch_and_prune` function could also apply to functions from R^n -> R, in which case there is not a single dimension n, but rather an input and an output dimension.
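For readers unfamiliar with the algorithm under discussion, here is a minimal, self-contained 1-D sketch of the branch-and-prune idea. All names here (`Iv`, `f_may_have_root`) are hypothetical; the actual package works on IntervalArithmetic.jl intervals with a proper contractor, not the crude range test used below.

```julia
# Minimal 1-D branch-and-prune sketch (illustrative, not the package's code).
struct Iv
    lo::Float64
    hi::Float64
end

width(X::Iv) = X.hi - X.lo
midpoint(X::Iv) = (X.lo + X.hi) / 2
bisect(X::Iv) = (Iv(X.lo, midpoint(X)), Iv(midpoint(X), X.hi))

# `may_contain_root` is any predicate telling whether X may contain a root.
function branch_and_prune(X::Iv, may_contain_root; tol=1e-3)
    working, found = [X], Iv[]
    while !isempty(working)
        Y = pop!(working)
        may_contain_root(Y) || continue      # prune: no root possible here
        if width(Y) < tol
            push!(found, Y)                  # small enough: keep the box
        else
            append!(working, bisect(Y))      # branch: split and recurse
        end
    end
    return found
end

# Crude range enclosure of f(x) = x^2 - 2 over [lo, hi] for the demo.
function f_may_have_root(X::Iv)
    vals = (X.lo^2 - 2, X.hi^2 - 2)
    lo = (X.lo <= 0 <= X.hi) ? -2.0 : min(vals...)
    return lo <= 0 <= max(vals...)
end

boxes = branch_and_prune(Iv(-10.0, 10.0), f_may_have_root)
```

The surviving boxes cluster around the two roots ±√2, each narrower than the tolerance.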
Implementing Krawczyk should definitely be on the agenda!
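For reference, the Krawczyk operator mentioned here is K(X) = m - C·f(m) + (1 - C·f'(X))·(X - m), with m = mid(X) and scalar preconditioner C = 1/f'(m); if K(X) lies strictly inside X, the root in X is unique. A self-contained sketch (the `Intv` type and `krawczyk` function are illustrative, not the package's API):

```julia
# Toy interval type with just enough arithmetic for one Krawczyk step.
import Base: +, -, *

struct Intv
    lo::Float64
    hi::Float64
end
Intv(x::Real) = Intv(float(x), float(x))   # thin (degenerate) interval

+(A::Intv, B::Intv) = Intv(A.lo + B.lo, A.hi + B.hi)
-(A::Intv, B::Intv) = Intv(A.lo - B.hi, A.hi - B.lo)
function *(A::Intv, B::Intv)
    p = (A.lo * B.lo, A.lo * B.hi, A.hi * B.lo, A.hi * B.hi)
    return Intv(min(p...), max(p...))
end
mid(A::Intv) = (A.lo + A.hi) / 2

# One Krawczyk step: K(X) = m - C*f(m) + (1 - C*f'(X)) * (X - m).
function krawczyk(f, fp, X::Intv)
    m = mid(X)
    C = 1 / mid(fp(Intv(m)))               # scalar preconditioner 1/f'(m)
    return Intv(m) - Intv(C) * f(Intv(m)) + (Intv(1) - Intv(C) * fp(X)) * (X - Intv(m))
end

X = Intv(1.0, 2.0)
K = krawczyk(Y -> Y * Y - Intv(2), Y -> Intv(2) * Y, X)
# K lands strictly inside X, certifying a unique root (sqrt(2)) in X.
```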
(I apparently messed up the message of the last commit, git sometimes eludes me)
Here is a short summary of the changes in the new commit:

- `branch_and_prune` is no longer overloaded and now always has the signature `branch_and_prune(X, contract, tol=1e-3)`. The function to be applied is no longer passed as an argument, as it is not explicitly needed (it is accessed through the separator). `roots` is now fully responsible for correctly creating the separator.
- `Newton(f_prime::Function)` still returns a `NewtonConstructor`. The reason is that supporting both syntaxes `roots(f, X, Newton)` and `roots(f, X, Newton(f_prime))` has the following consequence: either `Newton` and `Newton(f_prime)` return the same type, or they do not.
  - In the first case, `roots` must somehow check whether an explicit derivative has been passed and, if so, add the function `f` to the separator. This makes the code both messy and convoluted.
  - In the second case, the fact that `Newton` and `Newton(f_prime)` return different types can be used to distinguish the two syntaxes in `roots`. This is the choice I have stuck with for now. However, despite the fact that `Newton` can return multiple types, the code is still type stable: for a given call signature, `Newton` always has the same return type, so it can be inferred.
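The dispatch trick described in the second case can be illustrated with a self-contained sketch (the names `myroots`, `:derivative_from_AD`, and `:explicit_derivative` are hypothetical, not the package's actual code):

```julia
# `Newton` the type and `Newton(f_prime)` the value have different types,
# so the two syntaxes can be told apart purely by dispatch.
struct NewtonConstructor{F}          # wrapper carrying an explicit derivative
    f_prime::F
end

struct Newton end                    # stands in for the package's Newton method

# Calling the Newton type with a function returns the wrapper.
Newton(f_prime::Function) = NewtonConstructor(f_prime)

# Two methods, no runtime checks needed:
myroots(f, X, ::Type{Newton}) = :derivative_from_AD
myroots(f, X, m::NewtonConstructor) = :explicit_derivative

myroots(x -> x^2 - 2, (-10, 10), Newton)            # dispatches on the type
myroots(x -> x^2 - 2, (-10, 10), Newton(x -> 2x))   # dispatches on the wrapper
```

Each method has a single concrete return type, which is why the whole construction stays inferable despite `Newton(...)` not returning a `Newton`.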
Hope this explanation helps.
Sorry, I wasn't suggesting your code was bad, but rather that my original code needed a refactor, but it sounds like you've done it already -- thanks! I'll take a careful look.
But I see the following simple benchmarks.

Current master:

```julia
@time roots(x -> x^2 - 2, -10..10)
  0.043622 seconds (21.65 k allocations: 1.096 MiB)
2-element Array{IntervalRootFinding.Root{IntervalArithmetic.Interval{Float64}},1}:
 Root([1.41421, 1.41422], :unique)
 Root([-1.41422, -1.41421], :unique)
```

This PR:

```julia
  0.058387 seconds (27.36 k allocations: 1.389 MiB)
```

which is a 33% slowdown.
This is looking quite a lot cleaner than the original code, thanks!

I still haven't got my head around the need for `NewtonConstructor`, though.
The performance difference is a deep mystery to me. If I run it using BenchmarkTools, I get for current master:

```julia
@benchmark roots(x -> x^2 - 2, -10..10)

BenchmarkTools.Trial:
  memory estimate:  163.00 KiB
  allocs estimate:  4535
  --------------
  minimum time:     442.071 μs (0.00% GC)
  median time:      513.917 μs (0.00% GC)
  mean time:        592.733 μs (11.30% GC)
  maximum time:     11.005 ms (93.67% GC)
  --------------
  samples:          8410
  evals/sample:     1
```

and for this PR:

```julia
@benchmark roots(x -> x^2 - 2, -10..10)

BenchmarkTools.Trial:
  memory estimate:  160.39 KiB
  allocs estimate:  4444
  --------------
  minimum time:     424.843 μs (0.00% GC)
  median time:      510.619 μs (0.00% GC)
  mean time:        578.947 μs (11.10% GC)
  maximum time:     11.543 ms (93.58% GC)
  --------------
  samples:          8606
  evals/sample:     1
```
It looks like there is a difference only when the function is run a single time. For example, on current master we have:

```julia
@time for i in 1:100 roots(x -> x^2 - 2, -10..10) end
  0.096897 seconds (470.96 k allocations: 16.853 MiB, 10.24% gc time)
```

and for this PR:

```julia
@time for i in 1:100 roots(x -> x^2 - 2, -10..10) end
  0.105922 seconds (467.67 k allocations: 16.891 MiB, 9.34% gc time)
```
I have no idea where it could come from (and profiling will probably not help either).
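One possible (unverified) explanation for a gap that only shows up on a single run is that `@time` includes JIT compilation on the first call to a method, so a one-shot timing mostly measures compilation rather than the algorithm. A minimal illustration:

```julia
# @time on a first call includes JIT compilation; later calls do not.
g(x) = x^2 - 2

@time g(1.5)   # first call: timing dominated by compilation
@time g(1.5)   # second call: the method is already compiled

g(1.5)         # the computation itself: 1.5^2 - 2 = 0.25
```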
And the need for a `NewtonConstructor` is indeed not obvious, but I think it is the best (if not the only) satisfying alternative for the current syntax (I would not have imagined it either if I hadn't tried to implement it ^^).
I'm happy to merge this and keep iterating in other PRs. What do you think?
I just fixed a bit of notation and simplified the `Newton` code as discussed. Now I think it is ready to be merged :)
Very nice, thanks a lot!
Fixes #45

The tests mainly consist of comparing the default `Newton` method, which uses `ForwardDiff`, against passing the derivative/jacobian generated by `ForwardDiff` explicitly with the new syntax.
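As background, the default derivative comes from forward-mode automatic differentiation. The core idea can be sketched with a minimal dual number; this is illustrative only and not ForwardDiff.jl's actual implementation:

```julia
# Toy forward-mode dual number: carry a derivative alongside each value.
import Base: +, -, *

struct Dual
    val::Float64   # function value
    der::Float64   # derivative value propagated by the arithmetic rules
end
Dual(x::Real) = Dual(float(x), 0.0)

+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
-(a::Dual, b::Dual) = Dual(a.val - b.val, a.der - b.der)
*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der * b.val + a.val * b.der)
-(a::Dual, b::Real) = a - Dual(b)

# Seed der = 1 at the input to read f'(x) off the output.
derivative(f, x) = f(Dual(float(x), 1.0)).der

f(x) = x^2 - 2
fprime(x) = 2x              # the explicit derivative a user might pass

derivative(f, 3.0)          # agrees with fprime(3.0)
```

This is exactly the kind of agreement the tests check: the AD-generated derivative and the hand-written one should give the same `Newton` behaviour.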