Closed dlfivefifty closed 7 years ago
I should mention I only spot-checked the special functions as I went, and didn't go through the (mind-numbing) process of testing each function and writing tests. Perhaps this is part of a bigger conversation on resolving functions by constructor vs. solving their ODEs.
The DDMF project is based on the notion that the ODE and some special values can be the main data structure for a special function, and their implementation runs in Maple. If \ can return BigFloat Funs, then this would probably be the way to go. However, I agree it's hard to convince someone that 0.00999983333401211 is the best value for sin(0.01) we can give in double precision at the moment.
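Just to quantify the gap: the following stdlib-Python snippet (Python used only for the arithmetic, the point is language-independent) compares the quoted value against the library sine, which is correctly rounded in double precision:

```python
import math

quoted = 0.00999983333401211   # value quoted above, from the ODE route
true = math.sin(0.01)          # library sin, accurate to double precision

abs_err = abs(true - quoted)
rel_err = abs_err / abs(true)
digits = -math.log10(rel_err)  # correct significant digits

print(abs_err, rel_err, digits)  # ~1.5e-13 absolute, i.e. ~11 digits instead of 16
```

So the ODE route loses about five significant digits on this evaluation.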
Here's another example, far from the singular point, where the accuracy is low:
x = Fun(identity,[1e2,1e4])
f = besselj0(x)
g = Fun(besselj0,[1e2,1e4])
f[500]-besselj0(500),g[500]-besselj0(500)
The exponential works fine, but is it mentioned anywhere that for functions on an exponential scale only absolute accuracy is possible?
x = Fun(identity)
f = exp(x)
f[1]-exp(1),f[-1]-exp(-1) # absolute
f[1]/exp(1)-1,f[-1]/exp(-1)-1 # relative
f = exp(5x)
f[1]-exp(5),f[-1]-exp(-5) # absolute
f[1]/exp(5)-1,f[-1]/exp(-5)-1 # relative ==> loss of about log10(maxabs(f)/minabs(f)) digits
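The digit-loss estimate above can be checked by hand: a Chebyshev expansion carries absolute errors of size eps·maxabs(f), so where |f| is smallest the relative error inflates by maxabs(f)/minabs(f). A stdlib-Python sketch of the numbers for f = exp(5x) on [-1,1] (just the arithmetic, not the ApproxFun API):

```python
import math, sys

eps = sys.float_info.epsilon                 # double machine epsilon, ~2.22e-16
maxabs, minabs = math.exp(5), math.exp(-5)   # extremes of exp(5x) on [-1,1]

abs_level = eps * maxabs                     # absolute error floor of the expansion
rel_at_min = abs_level / minabs              # relative error where f is smallest
digits_lost = math.log10(maxabs / minabs)    # predicted loss, log10(maxabs/minabs)

print(rel_at_min, digits_lost)               # ~5e-12 relative error, ~4.3 digits lost
```

This matches the observed relative errors at x = -1 in the snippet above.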
It might be a case of not having the right number of boundary conditions: when an ODE has a singular point, the number of conditions needed isn't necessarily the same as the order of the equation.
I had a preprint in progress on solving singular ODEs but got stuck. You can see the instability, though, by inverting the finite section of the operator and checking the norm of the inverse.
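For concreteness, here is a toy version of that check in stdlib Python (a lower-bidiagonal stand-in operator, not ApproxFun's discretization): the finite sections A_n of the recurrence u_{k+1} = 2 u_k have inverse norms growing like 2^n, the geometric blow-up that flags instability, while the contractive recurrence u_{k+1} = u_k/2 stays bounded.

```python
def inv_one_norm_bidiagonal(n, c):
    """1-norm of the inverse of the n x n lower-bidiagonal matrix with 1 on
    the diagonal and -c on the subdiagonal, via forward substitution on each
    unit vector (no dense inverse formed)."""
    best = 0.0
    for j in range(n):
        # solve A x = e_j by forward substitution: x[i] = rhs + c*x[i-1]
        x = [0.0] * n
        for i in range(j, n):
            rhs = 1.0 if i == j else 0.0
            x[i] = rhs + (c * x[i - 1] if i > 0 else 0.0)
        best = max(best, sum(abs(v) for v in x))  # max column sum = 1-norm
    return best

unstable = [inv_one_norm_bidiagonal(n, 2.0) for n in (5, 10, 15)]
stable = [inv_one_norm_bidiagonal(n, 0.5) for n in (5, 10, 15)]
print(unstable)  # grows like 2^n: [31.0, 1023.0, 32767.0]
print(stable)    # bounded below 2
```

Growing inverse norms mean the finite-section solves amplify data errors unboundedly as n increases, which is exactly the symptom described above.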
This has been reverted to cos(f::Fun) = real(exp(im*f)) and sin(f::Fun) = imag(exp(im*f)), right?
I think so
This seems to be caused by f and its derivatives being large:
x=Fun(identity,[0.,100.])
f=x^2
g=chop(imag(f),eps())
xmin,xmax=first(domain(g)),last(domain(g))
D=Derivative(space(x))
B=[Evaluation(space(f),xmin),Evaluation(space(f),xmax)]
u=[B;(f'*D^2-f''*D+f'^3)]\[sin(f(xmin));sin(f(xmax))]
u(0.5)-sin(.5^2) # approx 1E-10
f=sin(x)
u=[B;(f'*D^2-f''*D+f'^3)]\[sin(f(xmin));sin(f(xmax))]
u(0.5)-sin(sin(.5)) # approx 1E-13
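For the record, u = sin(f) really is annihilated by the operator f'*D^2 - f''*D + f'^3 used in the solves above: with u' = f' cos f and u'' = f'' cos f - (f')^2 sin f, all terms cancel. A stdlib-Python spot check for f(x) = x^2, with the derivatives written out by hand:

```python
import math

def residual(x):
    # f = x^2, so f' = 2x and f'' = 2
    fp, fpp = 2.0 * x, 2.0
    u = math.sin(x * x)
    up = fp * math.cos(x * x)                              # u'  = f' cos f
    upp = fpp * math.cos(x * x) - fp**2 * math.sin(x * x)  # u'' = f'' cos f - (f')^2 sin f
    return fp * upp - fpp * up + fp**3 * u                 # f' u'' - f'' u' + (f')^3 u

residuals = [abs(residual(x)) for x in (0.5, 1.3, 2.7, 5.0)]
print(residuals)  # all at rounding level
```

So any error in u comes from the boundary-value solve itself, not from the operator being wrong.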
This is fixed, and the solution is annoyingly simple and obvious: just project to the canonical domain. I'm not sure why it actually made such a big difference...
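The idea, as I understand it (a sketch, not the actual ApproxFun internals): always interpolate at Chebyshev points on [-1,1] and compose with the affine map to the user's domain, rather than working directly on [a,b]. A stdlib-Python barycentric version for sin(x^2) on [0,10]:

```python
import math

a, b = 0.0, 10.0
f = lambda x: math.sin(x * x)

n = 400  # generous degree for ~16 oscillations on [0,10]
t = [math.cos(j * math.pi / n) for j in range(n + 1)]  # Chebyshev points on [-1,1]
to_x = lambda tj: 0.5 * (a + b) + 0.5 * (b - a) * tj   # affine map to [a,b]
fv = [f(to_x(tj)) for tj in t]

def interp(x):
    """Barycentric evaluation: project x to the canonical domain, then interpolate."""
    s = (2.0 * x - (a + b)) / (b - a)
    num = den = 0.0
    for j in range(n + 1):
        if s == t[j]:
            return fv[j]
        w = (-1.0) ** j * (0.5 if j in (0, n) else 1.0)  # weights for Chebyshev points
        q = w / (s - t[j])
        num += q * fv[j]
        den += q
    return num / den

err = abs(interp(0.1) - math.sin(0.01))
print(err)  # near machine precision
```

Working through the canonical domain keeps all the point arithmetic on [-1,1], where the barycentric formula is well scaled.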
The readme test had to have its tolerance weakened:
x = Fun(identity,[0.,10.])
f = sin(x^2)
g = cos(x)
@test_approx_eq_eps f[.1] sin(.1^2) 1000eps()
Changing 10. to something larger causes more inaccuracy. Oddly, the result satisfies the ODE better than Fun(x->sin(x^2),[0,10]) does; not sure I understand what's going on... I guess the BVP being solved is unstable, probably due to the singular point.