JuliaApproximation / ApproxFun.jl

Julia package for function approximation
http://juliaapproximation.github.io/ApproxFun.jl/

Double roots sometimes missed #447

Closed dpsanders closed 7 years ago

dpsanders commented 7 years ago

Double roots are sometimes missed, even though the colleague matrix eigenvalues are correctly found.

julia> a = 2; f = Fun(x -> (x^2-2)^2, -a..a); roots(f)
2-element Array{Float64,1}:
 1.41421
 1.41421

julia> eigvals(ApproxFun.colleague_matrix(f.coefficients))
4-element Array{Complex{Float64},1}:
 -0.707107+6.89931e-9im
 -0.707107-6.89931e-9im
           0.707107+0.0im
           0.707107+0.0im
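The masking step can be sketched as follows. This is illustrative, not ApproxFun's actual internals: `real_roots` and `htol` are hypothetical names. With a tolerance at `eps()` the pair -0.707107 ± 6.9e-9im above is discarded; a looser tolerance on the order of `sqrt(eps())` keeps it.

```julia
# Illustrative sketch: project colleague-matrix eigenvalues onto the real
# line when the imaginary part is below a tolerance, discarding the rest.
function real_roots(λ::Vector{Complex{Float64}}, htol::Float64)
    [real(z) for z in λ if abs(imag(z)) ≤ htol]
end

# The eigenvalues from the report above:
λ = [-0.707107 + 6.89931e-9im, -0.707107 - 6.89931e-9im,
      0.707107 + 0.0im,         0.707107 + 0.0im]

real_roots(λ, eps())        # keeps only the two exactly-real eigenvalues
real_roots(λ, sqrt(eps()))  # keeps all four
```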
dpsanders commented 7 years ago

This may just be a problem with the tolerance on the imaginary part?

dlfivefifty commented 7 years ago

It looks like it. The tolerance on the complex part has not been thought through. complexroots gives all the roots.

At the moment, it's using LAPACK's upper-Hessenberg eigvals with an ad hoc column balancing. Using dense eigvals sometimes performs better, as that applies LAPACK's built-in column balancing for you; I never figured out how to access that balancing directly.

Another option is to switch to AVWM.jl, but that is slower at small dimensions, and roots for Chebyshev only needs to deal with low-degree polynomials since it auto-subdivides.
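For reference, here is a hedged sketch of the standard Chebyshev colleague matrix (Good's construction), whose eigenvalues are the roots of p(x) = Σₖ a[k+1]·Tₖ(x). The `colleague` function is written from the textbook recurrence, not copied from ApproxFun.colleague_matrix. The example polynomial is the issue's f rescaled to [-1, 1], since (x²-2)² on -2..2 becomes 16(t²-1/2)² = 2(T₀(t) + T₄(t)) up to scale:

```julia
using LinearAlgebra

# Colleague matrix for p(x) = sum_{k=0}^n a[k+1]*T_k(x) on [-1, 1]; assume n ≥ 2.
function colleague(a::Vector{Float64})
    n = length(a) - 1
    C = zeros(n, n)
    C[1, 2] = 1.0                     # x*T_0 = T_1
    for i in 2:n
        C[i, i-1] = 0.5               # x*T_k = (T_{k-1} + T_{k+1})/2
        i < n && (C[i, i+1] = 0.5)
    end
    # Eliminate T_n via p(x) = 0  =>  T_n = -(a_0*T_0 + ... + a_{n-1}*T_{n-1})/a_n
    C[n, :] .-= a[1:n] ./ (2a[end])
    C
end

# (t^2 - 1/2)^2 = (1/8)*T_0 + (1/8)*T_4: the issue's example mapped to [-1, 1].
λ = eigvals(colleague([1/8, 0, 0, 0, 1/8]))
# All four eigenvalues cluster near ±1/√2, split at roughly the sqrt(eps()) scale.
```

Calling `eigvals` on the full dense matrix goes through LAPACK's general nonsymmetric path, which balances automatically; a Hessenberg-only route skips that step, which is presumably why the ad hoc column scaling was needed.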


ajt60gaibb commented 7 years ago

It is perfectly reasonable for a numerically stable polynomial rootfinder to miss a double root. Just from perturbing the original polynomial, a stable rootfinder on the example above can either return two simple roots or miss the double root entirely. If you change the tolerance on the complex part for masking, then the rootfinder will start to project more imaginary roots onto the real line.

Missing double roots like this is a consequence of working in floating point arithmetic.
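To spell out the perturbation argument with the simplest model problem: a double root at r responds to an O(δ) coefficient perturbation by moving O(√δ). Perturbing q(x) = (x-r)² down gives two simple real roots r ± √δ; perturbing it up gives the complex pair r ± i√δ. With δ on the order of eps(), √δ ≈ 1.5e-8, which is exactly the scale of the spurious imaginary parts in the eigenvalues reported above.

```julia
# Model problem: a double root under an O(eps) perturbation of the polynomial.
r, δ = sqrt(2), eps()
down = (r - sqrt(δ), r + sqrt(δ))        # roots of (x-r)^2 - δ: two simple real roots
up   = (r - im*sqrt(δ), r + im*sqrt(δ))  # roots of (x-r)^2 + δ: no real roots at all
# Either outcome is a backward-stable answer; only the "down" case looks like
# two real roots, and the "up" case has imaginary parts ~ sqrt(eps()) ≈ 1.5e-8.
```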

dlfivefifty commented 7 years ago

I guess it would become reasonable if you use interval arithmetic: that is, you look for the set of all x such that f(x) is within eps of 0.

@dpsanders and I were talking about incorporating interval arithmetic in ApproxFun to get rigorous calculations.
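The set-valued notion makes the double root robust. Here is a crude sampling sketch of that set (plain floating-point sampling, not the rigorous interval arithmetic being proposed): for a double root, |f| stays below a tolerance ε on a whole interval of width ~√ε around the root, versus ~ε for a simple root, so the set is hard to miss.

```julia
# Crude sketch of {x : |f(x)| ≤ ε} for the function from the issue.
# Near the double roots ±sqrt(2), f(±sqrt(2) + h) ≈ 8h^2, so the set is
# two intervals of half-width sqrt(ε/8) ≈ 3.5e-4 for ε = 1e-6.
f(x) = (x^2 - 2)^2
ε = 1e-6
xs = range(-2, 2; length=2_000_001)           # grid spacing 2e-6
hits = [x for x in xs if abs(f(x)) ≤ ε]
# hits forms two clusters, one around -sqrt(2) and one around +sqrt(2)
```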


MikaelSlevinsky commented 7 years ago

Does the balancing scale only by powers of 2? That would ensure that no rounding errors (provided the coefficients are not near the type's min/max) pollute the colleague matrix before it is passed to an upper-Hessenberg eigvals.
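The point about powers of 2 can be checked directly: scaling a binary float by 2ᵏ changes only the exponent field, so (barring under/overflow) it is exact, while a general scale factor rounds.

```julia
# Power-of-two scaling round-trips exactly:
x = 0.1
@assert ldexp(x, 8) / 256 == x   # multiply and divide by 2^8: exact
@assert (x * 256) / 256 == x     # 256 == 2^8, still exact

# A general scale factor does round:
(x * 3) / 3 == x                 # false: the round trip through *3 loses a bit
```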

dlfivefifty commented 7 years ago

The scaling I used was completely ad hoc: I just tried something and it fixed a bug. LAPACK has its own scaling for general eigenvalue problems, which would be much better to use, but as far as I can tell it isn't exposed for Hessenberg matrices.