JuliaApproximation / ClassicalOrthogonalPolynomials.jl

A Julia package for classical orthogonal polynomials and expansions
MIT License

NaN results from Jacobi expansion #67

Open putianyi889 opened 3 years ago

putianyi889 commented 3 years ago
julia> @time Chebyshev()\JacobiWeight(0,0.5)
  0.027141 seconds (3.08 k allocations: 3.701 MiB)
ℵ₀-element LazyArrays.CachedArray{Float64, 1, Vector{Float64}, FillArrays.Zeros{Float64, 1, Tuple{InfiniteArrays.OneToInf{Int64}}}} with indices OneToInf():
  0.9003163161786569
  0.600210877394969
 -0.12004217544451247
  0.05144664659444724
 -0.028581470311092174
  ⋮

julia> @time Jacobi(-0.5,-0.5)\JacobiWeight(0,0.5)
 14.685838 seconds (3.11 k allocations: 4.702 MiB)
ℵ₀-element LazyArrays.CachedArray{Float64, 1, Vector{Float64}, FillArrays.Zeros{Float64, 1, Tuple{InfiniteArrays.OneToInf{Int64}}}} with indices OneToInf():
 NaN
 NaN
 NaN
 NaN
 NaN
   ⋮

julia> @time Legendre()\JacobiWeight(0,0.5)
 15.032195 seconds (3.09 k allocations: 4.702 MiB)
ℵ₀-element LazyArrays.CachedArray{Float64, 1, Vector{Float64}, FillArrays.Zeros{Float64, 1, Tuple{InfiniteArrays.OneToInf{Int64}}}} with indices OneToInf():
  0.9428090415820628
  0.5656854249492391
 -0.13468700594029615
  0.0628539361054728
 -0.036732819801901045
  ⋮

julia> @time Jacobi(0,0)\JacobiWeight(0,0.5)
 14.450656 seconds (3.11 k allocations: 4.702 MiB)
ℵ₀-element LazyArrays.CachedArray{Float64, 1, Vector{Float64}, FillArrays.Zeros{Float64, 1, Tuple{InfiniteArrays.OneToInf{Int64}}}} with indices OneToInf():
 NaN
 NaN
 NaN
 NaN
 NaN
   ⋮
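For a sanity check (my own verification, not package output): `JacobiWeight(0,0.5)` is the function √(1+x) on [-1,1], whose Chebyshev-T coefficients have the closed form c₀ = 2√2/π and cₙ = (-1)^(n+1)·4√2/(π(4n²-1)) for n ≥ 1. The finite runs above match this, while the NaN runs are clearly broken rather than merely ill-conditioned:

```julia
# Closed-form Chebyshev coefficients of sqrt(1+x), i.e. JacobiWeight(0,0.5)
c(n) = n == 0 ? 2sqrt(2)/π : (-1)^(n + 1) * 4sqrt(2) / (π * (4n^2 - 1))

c.(0:4)  # ≈ [0.90032, 0.60021, -0.12004, 0.05145, -0.02858], matching Chebyshev() above
```
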
putianyi889 commented 4 days ago

The original NaN problem is fixed at present. However, if the Jacobi expansion is executed twice without garbage collection in between, I get an OutOfMemoryError on my machine. @dlfivefifty this issue could be closed if the error is expected.

julia> GC.gc()

julia> @time Jacobi(-0.5,-0.5)\JacobiWeight(0,0.5)
  0.479950 seconds (423 allocations: 45.424 GiB, 1.04% gc time)
vcat(77775-element Vector{Float64}, ℵ₀-element FillArrays.Zeros{Float64, 1, Tuple{InfiniteArrays.OneToInf{Int64}}} with indices OneToInf()) with indices OneToInf():
  0.9003163161571902
  1.200421754875805
 -0.3201124679665222
  ⋮

julia> @time Jacobi(-0.5,-0.5)\JacobiWeight(0,0.5)
ERROR: OutOfMemoryError()
Stacktrace:
  [1] GenericMemory
    @ .\boot.jl:516 [inlined]
  [2] new_as_memoryref
    @ .\boot.jl:535 [inlined]
  [3] Array
    @ .\boot.jl:582 [inlined]
  [4] hankel_partialchol(v::Vector{Float64})
    @ FastTransforms C:\Users\pty\.julia\packages\FastTransforms\0RK3x\src\toeplitzhankel.jl:66
  [5] _jac2jacTH_TLC(::Type{Float64}, mn::Tuple{Int64}, α::Float64, β::Float64, γ::Float64, δ::Float64, d::Int64)
    @ FastTransforms C:\Users\pty\.julia\packages\FastTransforms\0RK3x\src\toeplitzhankel.jl:457
  [6] _good_plan_th_jac2jac!(::Type{Float64}, mn::Tuple{Int64}, α::Float64, β::Float64, γ::Float64, δ::Float64, dims::Int64)
    @ FastTransforms C:\Users\pty\.julia\packages\FastTransforms\0RK3x\src\toeplitzhankel.jl:475
  [7] plan_th_jac2jac!(::Type{Float64}, mn::Tuple{Int64}, α::Float64, β::Float64, γ::Float64, δ::Float64, dims::Int64)
    @ FastTransforms C:\Users\pty\.julia\packages\FastTransforms\0RK3x\src\toeplitzhankel.jl:494
  [8] plan_th_cheb2jac!
    @ C:\Users\pty\.julia\packages\FastTransforms\0RK3x\src\toeplitzhankel.jl:728 [inlined]
  [9] th_cheb2jac
    @ C:\Users\pty\.julia\packages\FastTransforms\0RK3x\src\toeplitzhankel.jl:730 [inlined]
 [10] transform_ldiv
    @ C:\Users\pty\.julia\dev\ClassicalOrthogonalPolynomials\src\classical\jacobi.jl:300 [inlined]
 [11] copy
    @ C:\Users\pty\.julia\packages\ContinuumArrays\Py5Q6\src\bases\bases.jl:119 [inlined]
 [12] materialize
    @ C:\Users\pty\.julia\packages\ArrayLayouts\48qDX\src\ldiv.jl:22 [inlined]
 [13] ldiv
    @ C:\Users\pty\.julia\packages\ArrayLayouts\48qDX\src\ldiv.jl:98 [inlined]
 [14] \
    @ C:\Users\pty\.julia\packages\QuasiArrays\UD7Ge\src\matmul.jl:34 [inlined]
 [15] macro expansion
    @ .\timing.jl:581 [inlined]
 [16] top-level scope
    @ .\REPL[9]:1
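
A stopgap consistent with the transcript above (my own suggestion, not a package feature) is to force a full collection before each expansion, so the multi-GiB temporaries left by the previous Toeplitz-Hankel plan are reclaimed first:

```julia
using ClassicalOrthogonalPolynomials

# Hypothetical helper, not part of the package: run a full GC sweep before
# each expansion so the previous plan's temporaries are freed first.
function expand_with_gc(P, w)
    GC.gc(true)  # full collection, as in the successful run above
    return P \ w
end

expand_with_gc(Jacobi(-0.5, -0.5), JacobiWeight(0, 0.5))
```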
dlfivefifty commented 2 days ago

The Toeplitz-Hankel transforms are not very memory friendly, due to a foolish decision on my part to transform a tensor all at once.

If we rewrite them to reuse memory, it should be fine.
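
A minimal sketch of the shape such a rewrite could take (hypothetical code, not FastTransforms' actual internals): apply an in-place 1D transform slice by slice through one reused buffer, instead of materializing temporaries for the whole tensor at once.

```julia
# Hypothetical sketch: `transform!` is any function that transforms a
# length-m vector in place. Applying it column by column through one
# reused workspace keeps peak memory at O(m) rather than O(size of the
# whole tensor's temporaries).
function transform_columns!(transform!::F, A::AbstractMatrix{T}) where {F,T}
    buf = Vector{T}(undef, size(A, 1))  # single reused workspace
    for j in axes(A, 2)
        copyto!(buf, view(A, :, j))     # load one column
        transform!(buf)                 # in-place 1D transform
        copyto!(view(A, :, j), buf)     # write it back
    end
    return A
end
```

The same pattern extends to higher-dimensional tensors by iterating over the remaining dimensions with the same buffer.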