Quick workaround: turn off adaptation.
```julia
julia> m2 = central_fdm(5, 1; adapt=0);

julia> m2(f, 0.0)
0.0 ± 0.00014
```
There are a few places in the code, namely https://github.com/JuliaDiff/FiniteDifferences.jl/blob/93cd547363ab182ea42ddff7c112824d147d923c/src/methods.jl#L246-L250 and https://github.com/JuliaDiff/FiniteDifferences.jl/blob/93cd547363ab182ea42ddff7c112824d147d923c/src/methods.jl#L263, where we use `T(step)` to convert the step into the same type as the primal value `x`.
This was done when chasing down allocations from type-instabilities.
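For context, the pattern at those lines amounts to converting the step with `T(step)` where `T = typeof(x)`. A minimal sketch of why that conversion is the fragile part when a `Measurement` leaks in (the names below are illustrative, not the library's code):

```julia
using Measurements

x = 0.0                      # primal value, so T = Float64
T = typeof(x)

step = 1e-3                  # a plain Float64 step converts fine
T(step)                      # 0.001

leaked_step = 1e-3 ± 1e-4    # a step carrying uncertainty, like estimate_step returns here
# T(leaked_step)             # Float64(::Measurement{Float64}) -- this is where the conversion breaks down
```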
Mathematically, I don't think this is required. We just need `x + step` to be the same type as `x`. However, I don't think `step::Measurement{Float64}` satisfies that: `x + step` gives a result that is a `Measurement{Float64}`.
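To spell out the promotion (a hedged example, assuming `x` is a plain `Float64` as in the report):

```julia
using Measurements

x = 0.0                  # the primal value is a Float64
plain_step = 1e-3        # a Float64 step keeps x + step a Float64
meas_step = 1e-3 ± 1e-4  # a Measurement step promotes the sum

typeof(x + plain_step) === typeof(x)  # true
typeof(x + meas_step) === typeof(x)   # false: the sum is a Measurement{Float64}
```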
But the question then is: where is our step coming from? It comes out of `FiniteDifferences.estimate_step`:

```julia
julia> FiniteDifferences.estimate_step(m, f, 0.0)
(0.00097 ± 2500.0, 4.3e-13 ± 1.1e-6)
```
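Checking the type of the returned step confirms the diagnosis (a sketch of what the REPL would show, assuming `m` and `f` are the method and function from the original report):

```julia
julia> step, acc = FiniteDifferences.estimate_step(m, f, 0.0);

julia> typeof(step)
Measurement{Float64}
```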
Something is off somewhere in the adaptation logic of https://github.com/JuliaDiff/FiniteDifferences.jl/blob/93cd547363ab182ea42ddff7c112824d147d923c/src/methods.jl#L362-L426, because it is yielding a step that has the same type as `f(x)` rather than the type of `x`.
@wesselb might have ideas, but otherwise this is going to take a fair bit of digging to uncover.