JuliaApproximation / ApproxFun.jl

Julia package for function approximation
http://juliaapproximation.github.io/ApproxFun.jl/

Provide function to get quadrature weights? #316

Open stevengj opened 8 years ago

stevengj commented 8 years ago

We have been using ApproxFun to do some optimization of a problem involving an integral, where the degrees of freedom are the values of an unknown function at Chebyshev points. ApproxFun then does integrals and interpolations for us, which is very convenient! However, in order to compute the gradient of our objective, which involves integrals, for passing to NLopt, it turns out that we need to know the quadrature weights that ApproxFun uses for each Chebyshev point, effectively the Clenshaw-Curtis quadrature weights.

I looked into the ApproxFun code, but since it computes integrals somewhat indirectly via cumsum, it seemed quite difficult to extract the (implicit) quadrature weights. Instead, we wrote our own function, based on my understanding of what ApproxFun does and Clenshaw-Curtis integration:

"""
Return the quadrature weights for the N Chebyshev points on the interval (-1,1),
corresponding to the points that ApproxFun returns for `points(Space([-1,1]), N)`.

Evaluating a function at those quadrature points, multiplying by the weights, and
summing is equivalent to computing the ApproxFun integral given the values at those
points.
"""
function chebweights(N)
    a = zeros(N)
    a[1:2:N] = (2/N) ./ (1 .- ((1:2:N) .- 1).^2)  # moments of the cosine series
    return FFTW.r2r(a, FFTW.REDFT01)  # DCT-III (needs `using FFTW` on modern Julia)
end

We've verified numerically that this works (it gives exactly the same result as ApproxFun's quadrature to machine precision, as far as I can tell). For example, here is the integral of exp(sin(20x)) using 10 points:

N = 10
d = Space([-1,1])
y = exp.(sin.(20 .* points(d, N)))
cheb = sum(y .* chebweights(N))
afun = sum(Fun(d, ApproxFun.transform(d, y)))
(afun - cheb) / afun

gives agreement to machine precision.
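For readers who want to check the construction outside Julia, here is a NumPy/SciPy sketch of the same computation (my translation, not ApproxFun code); `scipy.fft.dct(..., type=3)` uses the same unnormalized DCT-III convention as FFTW's REDFT01, and the resulting weights pair with the first-kind Chebyshev points cos(π(k+1/2)/N):

```python
import numpy as np
from scipy.fft import dct

def chebweights(N):
    """Quadrature weights at the N first-kind Chebyshev points
    cos(pi*(k+1/2)/N), k = 0..N-1 (NumPy translation of the Julia
    function above; scipy's unnormalized DCT-III matches FFTW.REDFT01)."""
    a = np.zeros(N)
    j = np.arange(0, N, 2)             # even 0-based indices 0, 2, 4, ...
    a[j] = (2.0 / N) / (1.0 - j**2)    # cosine-series moments
    return dct(a, type=3)              # DCT-III

w = chebweights(10)
x = np.cos(np.pi * (np.arange(10) + 0.5) / 10)  # the matching nodes
print(w.sum())           # should be ~2, the length of [-1, 1]
print((w * x**4).sum())  # exact for degree < N, so ~2/5 here
```

The first weight agrees with the FastTransforms `fejer1` output quoted later in this thread (≈ 0.0429391 for N = 10), consistent with this rule being Fejér's first rule at the first-kind points.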

Would you be interested in providing a function like this in ApproxFun? Presumably you would want a function of the form quadweights(d::Space, N::Integer) to mirror points(d, N).

cc: @rpestourie

dlfivefifty commented 8 years ago

That's a good idea, though maybe a more natural home is in FastGaussQuadrature.jl? @ajt60gaibb

The name chebweights could be confusing since one may expect it to return Gauss–Chebyshev weights: the quadrature weights corresponding to

$$\int_{-1}^1 \frac{f(x)}{\sqrt{1-x^2}}\,dx$$

(i.e., trapezoidal rule in disguise).
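Concretely, the Gauss–Chebyshev rule for that weight has N equal weights π/N at the first-kind Chebyshev points, which is exactly the trapezoidal rule in the angle variable θ = acos(x). A small NumPy check (my illustration, not ApproxFun or FastGaussQuadrature code):

```python
import numpy as np

N = 10
# Gauss-Chebyshev nodes: first-kind Chebyshev points
x = np.cos(np.pi * (np.arange(N) + 0.5) / N)
w = np.full(N, np.pi / N)   # every weight equals pi/N

# integral of x^2 / sqrt(1 - x^2) over (-1, 1) is pi/2;
# the rule is exact for polynomials of degree <= 2N - 1
approx = (w * x**2).sum()
print(approx, np.pi / 2)
```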

I'll have to think about quadweights(d::Space, N::Integer): would quadweights(Chebyshev(),10) return Clenshaw–Curtis weights or Gauss–Chebyshev weights? I guess the weight could implicitly be assumed to be the Legendre weight (i.e., 1) for every PolynomialSpace, with WeightSpace used for other weights:

quadweights(Chebyshev(),10)  # Returns Clenshaw–Curtis weights
quadweights(Legendre(),10)  # Returns Gauss–Legendre weights
quadweights(JacobiWeight(-0.5,-0.5,Chebyshev()),10)  # Returns Gauss–Chebyshev weights
quadweights(JacobiWeight(-0.5,-0.5,Legendre()),10)  # Returns the quadrature rule corresponding to 1/sqrt(1-x^2) but with Legendre points, if it's possible to construct

dlfivefifty commented 8 years ago

PS I'm sure you have your reasons for making the unknowns the values, but if you were to make the unknowns the Chebyshev coefficients you could avoid the extra FFT call. Even Chebfun has started to move away from working with values toward working with coefficients.

MikaelSlevinsky commented 8 years ago

The FastTransforms repo has two unexported functions, clenshawcurtis and clenshawcurtisweights. The functions also provide Clenshaw-Curtis quadrature (nodes and) weights for the Jacobi weight (1-x)^α(1+x)^β. The package is already required by the development version of ApproxFun.

julia> using FastTransforms

julia> FastTransforms.clenshawcurtis(10,0.,0.)
([1.0,0.9396926207859084,0.766044443118978,0.5,0.17364817766693041,-0.17364817766693041,-0.4999999999999999,-0.766044443118978,-0.9396926207859083,-1.0],[0.012345679012345678,0.11656745657203713,0.22528432333810441,0.30194003527336855,0.3438625058041441,0.3438625058041442,0.30194003527336855,0.22528432333810441,0.11656745657203713,0.012345679012345678])

julia> FastTransforms.clenshawcurtisweights(10,0.,0.)
10-element Array{Float64,1}:
 0.0123457
 0.116567 
 0.225284 
 0.30194  
 0.343863 
 0.343863 
 0.30194  
 0.225284 
 0.116567 
 0.0123457
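As a cross-check on the numbers above, the Clenshaw–Curtis weights at the second-kind points cos(kπ/n), n = N−1, have a standard closed-form cosine sum (as in Trefethen's clencurt); the NumPy sketch below is my own, not FastTransforms code, and reproduces the endpoint weight 1/n² = 1/81 ≈ 0.0123457 printed above:

```python
import numpy as np

def clenshawcurtisweights(N):
    """Clenshaw-Curtis weights for the N points cos(k*pi/n), k = 0..n,
    with n = N - 1 (standard closed-form cosine sum)."""
    n = N - 1
    k = np.arange(N)
    w = np.ones(N)
    for j in range(1, n // 2 + 1):
        b = 1.0 if 2 * j == n else 2.0   # halve the last term when n is even
        w -= b / (4 * j**2 - 1) * np.cos(2 * np.pi * j * k / n)
    w *= 2.0 / n
    w[0] /= 2        # endpoint weights are halved
    w[-1] /= 2
    return w

w = clenshawcurtisweights(10)
print(w[0])      # endpoint weight: 1/81 for n = 9
print(w.sum())   # should be ~2, the length of [-1, 1]
```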

MikaelSlevinsky commented 8 years ago

Sorry, your chebweights function above actually corresponds to Fejér's first quadrature rule:

julia> using FastTransforms

julia> FastTransforms.fejer1(10,0.,0.)
([0.9876883405951378,0.8910065241883678,0.7071067811865476,0.45399049973954675,0.15643446504023087,-0.15643446504023087,-0.45399049973954675,-0.7071067811865476,-0.8910065241883678,-0.9876883405951378],[0.042939119574130796,0.14587491937739094,0.2203174603174603,0.2808792186638755,0.30998928206714255,0.30998928206714255,0.2808792186638755,0.2203174603174603,0.14587491937739094,0.042939119574130796])

julia> FastTransforms.fejerweights1(10,0.,0.)
10-element Array{Float64,1}:
 0.0429391
 0.145875 
 0.220317 
 0.280879 
 0.309989 
 0.309989 
 0.280879 
 0.220317 
 0.145875 
 0.0429391

ajt60gaibb commented 8 years ago

@dlfivefifty: I believe you only get the FFT saving when you make an ApproxFun.Fun. Assuming @stevengj and @rpestourie are in nonadaptive mode and outside of ApproxFun, then at some point there is precisely one FFT to be done: either on function values or to calculate the weights. Yes, I believe Chebfun essentially always uses coefficients at this point (I cannot think of an exception).

So far FastGaussQuadrature.jl is very streamlined in the sense that it only does standard Gauss quadratures. Of course, I am happy to add Clenshaw-Curtis rules too. I prefer the names fejer1, fejer2, fejer3, and fejer4 to distinguish between the four (unweighted) Clenshaw-Curtis rules. It looks like @MikaelSlevinsky has two of those already in FastTransforms, though.

stevengj commented 8 years ago

@dlfivefifty, the basic reason for using the values at the nodes rather than the coefficients is that we need to constrain the values at the nodes to lie within some [min,max] range, and by making the degrees of freedom the nodal values we can do this by a simple bound (box) constraint, which is very efficient in many nonlinear optimization algorithms. If we instead optimize over the coefficients, then we need a more general affine constraint, which is much less efficient, and often will be violated by intermediate steps in many nonlinear optimization algorithms. (The cost of any FFT is negligible for us.)

@MikaelSlevinsky, you are right that this is technically Fejer's first rule; we wanted it to correspond exactly to the set of points and polynomial interpolation that ApproxFun uses. (I just tend to think of these all as variants of Clenshaw-Curtis because that's where I learned it first, but of course Fejer actually came first.)

The reason for wanting something like this in ApproxFun itself is that we want a guarantee that it is the same quadrature rule as the one used in sum(f::Fun). Having it in a separate package will risk getting them out of sync.

dlfivefifty commented 8 years ago

OK, makes sense to have them exported from ApproxFun via a "quadrule(::Space,n)" command to ensure things are kept in sync.


On 8 Apr 2016, at 01:18, Steven G. Johnson notifications@github.com wrote:
