Closed: SKopecz closed this pull request 4 months ago.
:warning: Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 98.32%. Comparing base (431dfd9) to head (8420fee). Report is 2 commits behind head on main.
:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.
:umbrella: View full report in Codecov by Sentry.
There are failing tests in the "Linear advection" testset, but I don't understand why these fail.
In the VS Code REPL, when I execute the code line by line, everything works fine. But when this code is put in a @testset, the allocation tests fail, see below. In the current state of this PR the allocation tests for "Linear advection" are commented out.
Any idea what's going wrong @ranocha?
julia> @testset "Linear advection" begin
    # Linear advection discretized with finite differences and upwind, periodic boundary conditions
    # number of nodes
    N = 1000
    u0 = sin.(π * LinRange(0.0, 1.0, N + 1))[2:end];
    tspan = (0.0, 1.0)
    # in-place syntax for f
    function fdupwind!(du, u, p, t)
        N = length(u)
        dx = 1 / N
        du[1] = -(u[1] - u[N]) / dx
        for i in 2:N
            du[i] = -(u[i] - u[i - 1]) / dx
        end
    end
    fdupwind_f = ODEProblem(fdupwind!, u0, tspan);
    # in-place syntax for PDS
    function fdupwindP!(P, u, p, t)
        P .= 0.0
        N = length(u)
        dx = 1 / N
        P[1, N] = u[N] / dx
        for i in 2:N
            P[i, i - 1] = u[i - 1] / dx
        end
        return nothing
    end
    function fdupwindP!(P::SparseMatrixCSC, u, p, t)
        N = length(u)
        dx = 1 / N
        values = nonzeros(P)
        for col in axes(P, 2)
            for idx in nzrange(P, col)
                values[idx] = u[col] / dx
            end
        end
        return nothing
    end
    function fdupwindD!(D, u, p, t)
        D .= 0.0
        return nothing
    end
    # problem with dense matrices
    fdupwind_PDS_dense = PDSProblem(fdupwindP!, fdupwindD!, u0, tspan);
    # problem with sparse matrices
    p_prototype = spdiagm(-1 => ones(eltype(u0), N - 1), N - 1 => ones(eltype(u0), 1))
    d_prototype = zero(u0)
    fdupwind_PDS_sparse = PDSProblem(fdupwindP!, fdupwindD!, u0, tspan;
                                     p_prototype = p_prototype,
                                     d_prototype = d_prototype);
    fdupwind_PDS_sparse_2 = PDSProblem{true}(fdupwindP!, fdupwindD!, u0, tspan;
                                             p_prototype = p_prototype,
                                             d_prototype = d_prototype);
    fdupwind_ConsPDS_sparse = ConservativePDSProblem(fdupwindP!, u0, tspan;
                                                     p_prototype = p_prototype);
    fdupwind_ConsPDS_sparse_2 = ConservativePDSProblem{true}(fdupwindP!, u0, tspan;
                                                             p_prototype = p_prototype);
    # solutions
    sol_fdupwind_f = solve(fdupwind_f, Tsit5());
    sol_fdupwind_PDS_dense = solve(fdupwind_PDS_dense, Tsit5());
    sol_fdupwind_PDS_sparse = solve(fdupwind_PDS_sparse, Tsit5());
    sol_fdupwind_PDS_sparse_2 = solve(fdupwind_PDS_sparse_2, Tsit5());
    sol_fdupwind_ConsPDS_sparse = solve(fdupwind_ConsPDS_sparse, Tsit5());
    sol_fdupwind_ConsPDS_sparse_2 = solve(fdupwind_ConsPDS_sparse_2, Tsit5());
    # check equality of solutions
    @test sol_fdupwind_f.t ≈ sol_fdupwind_PDS_dense.t ≈
          sol_fdupwind_PDS_sparse.t ≈ sol_fdupwind_PDS_sparse_2.t ≈
          sol_fdupwind_ConsPDS_sparse.t ≈ sol_fdupwind_ConsPDS_sparse_2.t
    @test sol_fdupwind_f.u ≈ sol_fdupwind_PDS_dense.u ≈
          sol_fdupwind_PDS_sparse.u ≈ sol_fdupwind_PDS_sparse_2.u ≈
          sol_fdupwind_ConsPDS_sparse.u ≈ sol_fdupwind_ConsPDS_sparse_2.u
    # Check that we really do not use too many additional allocations
    alloc1 = @allocated(solve(fdupwind_f, Tsit5()))
    alloc2 = @allocated(solve(fdupwind_PDS_dense, Tsit5()))
    alloc3 = @allocated(solve(fdupwind_PDS_sparse, Tsit5()))
    alloc4 = @allocated(solve(fdupwind_PDS_sparse_2, Tsit5()))
    alloc5 = @allocated(solve(fdupwind_ConsPDS_sparse, Tsit5()))
    alloc6 = @allocated(solve(fdupwind_ConsPDS_sparse_2, Tsit5()))
    @test 0.95 < alloc1 / alloc2 < 1.05
    @test 0.95 < alloc1 / alloc3 < 1.05
    @test 0.95 < alloc1 / alloc4 < 1.05
    @test 0.95 < alloc1 / alloc5 < 1.05
    @test 0.95 < alloc1 / alloc6 < 1.05
end
Linear advection: Test Failed at /home/kopecz/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:232
Expression: 0.95 < alloc1 / alloc2 < 1.05
Evaluated: 0.95 < 1.378686729065512 < 1.05
Stacktrace:
[1] macro expansion
@ ~/.julia/juliaup/julia-1.10.0+0.x64.linux.gnu/share/julia/stdlib/v1.10/Test/src/Test.jl:672 [inlined]
[2] macro expansion
@ ~/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:232 [inlined]
[3] macro expansion
@ ~/.julia/juliaup/julia-1.10.0+0.x64.linux.gnu/share/julia/stdlib/v1.10/Test/src/Test.jl:1577 [inlined]
[4] top-level scope
@ ~/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:154
Linear advection: Test Failed at /home/kopecz/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:233
Expression: 0.95 < alloc1 / alloc3 < 1.05
Evaluated: 0.95 < 2.886397172466229 < 1.05
Stacktrace:
[1] macro expansion
@ ~/.julia/juliaup/julia-1.10.0+0.x64.linux.gnu/share/julia/stdlib/v1.10/Test/src/Test.jl:672 [inlined]
[2] macro expansion
@ ~/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:233 [inlined]
[3] macro expansion
@ ~/.julia/juliaup/julia-1.10.0+0.x64.linux.gnu/share/julia/stdlib/v1.10/Test/src/Test.jl:1577 [inlined]
[4] top-level scope
@ ~/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:154
Linear advection: Test Failed at /home/kopecz/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:234
Expression: 0.95 < alloc1 / alloc4 < 1.05
Evaluated: 0.95 < 2.886397172466229 < 1.05
Stacktrace:
[1] macro expansion
@ ~/.julia/juliaup/julia-1.10.0+0.x64.linux.gnu/share/julia/stdlib/v1.10/Test/src/Test.jl:672 [inlined]
[2] macro expansion
@ ~/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:234 [inlined]
[3] macro expansion
@ ~/.julia/juliaup/julia-1.10.0+0.x64.linux.gnu/share/julia/stdlib/v1.10/Test/src/Test.jl:1577 [inlined]
[4] top-level scope
@ ~/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:154
Linear advection: Test Failed at /home/kopecz/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:235
Expression: 0.95 < alloc1 / alloc5 < 1.05
Evaluated: 0.95 < 2.886397172466229 < 1.05
Stacktrace:
[1] macro expansion
@ ~/.julia/juliaup/julia-1.10.0+0.x64.linux.gnu/share/julia/stdlib/v1.10/Test/src/Test.jl:672 [inlined]
[2] macro expansion
@ ~/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:235 [inlined]
[3] macro expansion
@ ~/.julia/juliaup/julia-1.10.0+0.x64.linux.gnu/share/julia/stdlib/v1.10/Test/src/Test.jl:1577 [inlined]
[4] top-level scope
@ ~/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:154
Linear advection: Test Failed at /home/kopecz/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:236
Expression: 0.95 < alloc1 / alloc6 < 1.05
Evaluated: 0.95 < 2.886397172466229 < 1.05
Stacktrace:
[1] macro expansion
@ ~/.julia/juliaup/julia-1.10.0+0.x64.linux.gnu/share/julia/stdlib/v1.10/Test/src/Test.jl:672 [inlined]
[2] macro expansion
@ ~/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:236 [inlined]
[3] macro expansion
@ ~/.julia/juliaup/julia-1.10.0+0.x64.linux.gnu/share/julia/stdlib/v1.10/Test/src/Test.jl:1577 [inlined]
[4] top-level scope
@ ~/Schreibtisch/Code/PositiveIntegrators.jl/run/run.jl:154
Test Summary:    | Pass  Fail  Total   Time
Linear advection |    2     5      7  27.2s
ERROR: Some tests did not pass: 2 passed, 5 failed, 0 errored, 0 broken.
It seems to be related to where the functions are defined. I can reproduce the behavior you describe locally: without @testset, everything is fine. With @testset, the number of allocations increases a lot.
Apparently, you can fix this by moving the function definitions outside of the testset.
Great, works for me as well! I'll change this.
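For reference, a minimal sketch of that workaround, using only the definitions from the snippet above (only the ODEProblem part is shown; the PDS problems follow the same pattern). Presumably the difference is that functions defined inside a @testset live in the testset's local scope, while top-level definitions are ordinary generic functions; the final @test is just a placeholder.

using OrdinaryDiffEq, Test  # provides ODEProblem, Tsit5, solve and the Test macros

# Top-level definition instead of defining the function inside the @testset
function fdupwind!(du, u, p, t)
    N = length(u)
    dx = 1 / N
    du[1] = -(u[1] - u[N]) / dx
    for i in 2:N
        du[i] = -(u[i] - u[i - 1]) / dx
    end
end

@testset "Linear advection" begin
    N = 1000
    u0 = sin.(π * LinRange(0.0, 1.0, N + 1))[2:end]
    tspan = (0.0, 1.0)
    fdupwind_f = ODEProblem(fdupwind!, u0, tspan)
    alloc1 = @allocated(solve(fdupwind_f, Tsit5()))
    @test alloc1 > 0  # placeholder; the real tests compare allocation ratios as above
end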
experimental_orders_of_convergence to avoid excessive duplication of code.
prob_pds_robertson with nonadaptive schemes.
Since there have been a few bug fixes of the coefficients, maybe it would make sense to add a few tests of non-autonomous systems of the form $u'(t) = p \, t^{p - 1}$, $p \in \mathbb{N}$? We should be able to solve them exactly up to the order of the method.
First of all, MPRK schemes don't solve $u'(t) = q \, t^{q - 1}$ exactly! (I write $q$ for the exponent to avoid a clash with the parameter $p$ below.)
To check that the underlying RK schemes integrate $u'(t) = q \, t^{q - 1}$ exactly, I used the nonconservative PDS

$$
P(u,p,t) = \begin{pmatrix} q \, t^{q - 1} & 0\\ 0 & 0\end{pmatrix},\quad D(u,p,t)= \begin{pmatrix} 0\\0\end{pmatrix},
$$

for which the MPRK schemes reduce to the underlying RK scheme, since production terms on the diagonal of $P$ are not weighted.
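For instance, for the first-order scheme MPE applied to this PDS (a minimal sketch, assuming the diagonal production term really enters the update without a Patankar weight, as stated above), one step reads

$$
u_1^{n+1} = u_1^n + \Delta t \, q \, t_n^{q - 1}, \qquad u_2^{n+1} = u_2^n,
$$

which is exactly the explicit Euler step for $u'(t) = q \, t^{q - 1}$ and hence reproduces $u(t) = u(0) + t^q$ only for $q = 1$; the higher-order schemes are correspondingly exact only up to the order of their underlying RK method.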
Furthermore, I found that MPE solves

$$
u'(t) = -\frac{a_0\,b_1}{(b_0 + b_1 \,t)^2}\quad\implies\quad u(t)=\frac{a_0}{b_0+b_1 t}
$$

exactly. To test this I used the nonconservative PDS

$$
P(u,p,t) = \begin{pmatrix} 0 & 0\\ 0 & 0\end{pmatrix},\quad D(u,p,t)= \begin{pmatrix}\frac{a_0\,b_1}{(b_0 + b_1\,t)^2} \\0\end{pmatrix}.
$$
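A quick check for MPE (a sketch, assuming the nonconservative MPE step treats this destruction term as $u_1^{n+1} = u_1^n - \Delta t \, d_1(t_n) \, u_1^{n+1} / u_1^n$ with $d_1$ evaluated at $t_n$): starting from the exact value $u_1^n = a_0 / (b_0 + b_1 t_n)$,

$$
u_1^{n+1} = \frac{(u_1^n)^2}{u_1^n + \Delta t \, d_1(t_n)}
= \frac{a_0^2 / (b_0 + b_1 t_n)^2}{a_0 \, (b_0 + b_1 t_n + b_1 \Delta t) / (b_0 + b_1 t_n)^2}
= \frac{a_0}{b_0 + b_1 t_{n+1}},
$$

so each MPE step maps the exact solution at $t_n$ to the exact solution at $t_{n+1}$.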
At least MPRK22(1.0) doesn't solve the same equation exactly. It is completely unclear to me whether there exist PDS which the higher-order MPRK schemes can solve exactly.
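For the record, a sketch of how such an exactness test could look in code. The helper names nonauto_P! and nonauto_D!, the choice q = 2, and the fixed-step solve options (dt, adaptive = false) are my assumptions and not taken from this PR; the in-place PDSProblem constructor is used as in the linear advection snippet above.

using PositiveIntegrators, OrdinaryDiffEq, Test  # assumed imports

# u'(t) = q * t^(q - 1) with u(0) = 1, i.e. exact solution u(t) = 1 + t^q.
# The production term sits on the diagonal of P and is therefore not Patankar-weighted.
q = 2
function nonauto_P!(P, u, p, t)
    P .= 0
    P[1, 1] = q * t^(q - 1)
    return nothing
end
function nonauto_D!(D, u, p, t)
    D .= 0
    return nothing
end
u0 = [1.0, 1.0]  # second component only makes the PDS non-scalar
tspan = (0.0, 1.0)
prob = PDSProblem(nonauto_P!, nonauto_D!, u0, tspan)
sol = solve(prob, MPRK22(1.0); dt = 0.1, adaptive = false)
# The underlying second-order RK method should integrate u'(t) = 2t exactly:
@test first.(sol.u) ≈ u0[1] .+ sol.t .^ q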
Another question for the future is whether we want to be able to solve scalar PDSProblems. For those, $P$ would formally be a 1×1 matrix and $D$ a 1×1 vector. If yes, this would simplify both PDS from above, where the second row is unnecessary.
I think this would be fine as a special case
Sorry, I wasn't clear - this is basically what I had in mind. Thanks a lot for doing all the work 👍
I created issue #95 for this.
MPRK22(a) and SSPMPRK22(0, a) for nonconservative PDS.
small_constant.