QuantEcon / SimpleDifferentialOperators.jl

Library for simple upwind finite differences
MIT License

Test out the linear solves with the bandedblockbanded #185

Closed jlperla closed 5 years ago

jlperla commented 5 years ago

Check the speed of sparse vs. BandedBlockBanded. See the code in https://github.com/JuliaMatrices/BlockBandedMatrices.jl/issues/37#issuecomment-493765559 for how to convert.

@btime $A \ $x

Try the matrix-vector multiplication as well.
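A matrix-vector timing in the same style would look like this (a sketch only; `A` and `x` stand for the operator and a conforming vector, with `*` in place of `\`):

```julia
@btime $A * $x
```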

chiyahn commented 5 years ago

Sparse matrices are faster:

using SimpleDifferentialOperators, BenchmarkTools, SparseArrays, LinearAlgebra

# model setup
M = 100
μ_1 = -0.1
μ_2 = -0.15
σ = 0.1
ρ = 0.05
x̄ = range(0, 1, length = (M+2))
x = interiornodes(x̄)
bc = (Reflecting(), Reflecting())
b = [x.^3; x.^3]

# construct operators and intensity matrix
L_1 = I * ρ - (μ_1*L₁₋bc(x̄, bc) + σ^2 / 2 * L₂bc(x̄, bc))
L_2 = I * ρ - (μ_2*L₁₋bc(x̄, bc) + σ^2 / 2 * L₂bc(x̄, bc))
Q = [-0.1 0.1; 0.2 -0.2]

# construct the corresponding joint operator
L = jointoperator_bc((L_1, L_2), Q);

# sparsify L
L_sparse = sparse(L);

Running the code yields:


julia> @btime $L \ $b
  6.368 ms (22 allocations: 742.11 KiB)
200-element Array{Float64,1}:
 0.0155463913078714
 0.015568034752824894
 0.015615464005895772
 0.015693402152223573
 0.01580715295223779
 0.015962600845248226
 0.016166211004023635
 0.016425029441059565
 0.016746683167219657
 ⋮
 0.9113643934051219
 0.9409817582021364
 0.9697069215631581
 0.9969131626697918
 1.021761825522767
 1.0431358369515793
 1.059553094990427
 1.0690537632642085

julia> @btime $L_sparse \ $b
  141.600 μs (67 allocations: 280.63 KiB)
200-element Array{Float64,1}:
 0.015546391307920395
 0.015568034752874072
 0.015615464005945288
 0.015693402152273574
 0.015807152952288406
 0.015962600845299574
 0.016166211004075816
 0.016425029441112668
 0.01674668316727376
 ⋮
 0.9113643934047219
 0.9409817582017291
 0.9697069215627443
 0.9969131626693717
 1.021761825522341
 1.0431358369511483
 1.0595530949899932
 1.0690537632637733

The difference is more dramatic if I increase the grid size to M = 1000:

julia> @btime $L \ $b
  622.102 ms (22 allocations: 62.17 MiB)
2000-element Array{Float64,1}:
 0.010984999578321965
 0.010985158563388957
 0.01098547970777865
 0.01098596624589397
 0.01098662147228258
 0.010987448741634447
 0.010988451468779472
 0.010989633128685141
 0.010990997256454249
 ⋮
 1.0920824474529656
 1.0928808701817014
 1.0935767405163612
 1.0941664186309965
 1.0946461505342333
 1.0950120645941317
 1.0952601679590452
 1.095386342871432

julia> @btime $L_sparse \ $b
  1.733 ms (69 allocations: 2.58 MiB)
2000-element Array{Float64,1}:
 0.010984999513431591
 0.010985158498499589
 0.010985479642891267
 0.010985966181009534
 0.010986621407402032
 0.010987448676758719
 0.010988451403909473
 0.010989633063821759
 0.010990997191598356
 ⋮
 1.0920824473109902
 1.0928808700397004
 1.0935767403743388
 1.0941664184889568
 1.0946461503921818
 1.09501206445207
 1.0952601678169762
 1.0953863427293589
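For completeness, the matrix-vector comparison suggested at the top of the issue could be run against the same objects (a sketch of the call pattern only; no timings for it were posted in the thread):

```julia
# reuse L, L_sparse, and b from the setup above
@btime $L * $b
@btime $L_sparse * $b
```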