oyamad opened this issue 6 years ago
I agree that it is a better fit for BasisMatrices.jl.

There is an existing `Interpoland` type that does some of what your `Interp` does, but it doesn't store everything as a field (e.g. it doesn't store properties like `length` and `size`, and would have you just call a method to evaluate those when needed).

I'd be happy to iterate with you and extend that type to fill all your needs here.
That would be nice. What I really need are `basis`, `S`, `Phi`, and `Phi_lu` (and maybe `Scoord`).
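For concreteness, here is a minimal sketch of a container holding those pieces. The field names follow the discussion, but the struct itself and the `nnodes` helper are hypothetical placeholders, not the actual `Interp` or `Interpoland` definitions:

```julia
# Hypothetical container for the objects listed above -- a sketch only,
# not the actual Interp or Interpoland API from either package.
struct InterpSketch{TB,TS,TC,TP,TL}
    basis::TB    # the Basis object
    S::TS        # matrix of nodes (one row per grid point)
    Scoord::TC   # per-dimension coordinate vectors
    Phi::TP      # basis matrix evaluated at S
    Phi_lu::TL   # LU factorization of Phi, precomputed once
end

# Properties like the number of nodes can be derived on demand
# (a method call) rather than stored as a field:
nnodes(ip::InterpSketch) = size(ip.S, 1)

ip = InterpSketch(nothing, rand(6, 2), ([1.0, 2.0, 3.0], [1.0, 2.0]),
                  rand(6, 6), nothing)
nnodes(ip)  # 6
```

This is the trade-off mentioned above: derived quantities can live in small methods instead of being duplicated as stored fields.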
I don't do this right now, but we could store `S` and `Scoord`.

I do store a version of `Phi` evaluated in `Tensor` form. I'd like to see how you are using it -- maybe we don't need to store `Phi` (which is the `Expanded` form of what I currently store).
Similarly for `Phi_lu`: I have found that computing new coefficients directly from the `Tensor`-form basis matrix is often faster than precomputing and reusing the LU decomposition of the `Expanded` `BasisMatrix`. Here's a 3d example:
```julia
julia> using BasisMatrices, BenchmarkTools

julia> b = Basis(SplineParams(25, -1, 1, 1), SplineParams(20, -1, 2, 3), SplineParams(20, -3, 3, 3));

julia> Phi_tensor = BasisMatrix(b, Tensor, nodes(b)[2]);

julia> Phi = convert(Expanded, Phi_tensor);

julia> Phi_lu = lufact(Phi.vals[1]);

julia> y = rand(length(b));

julia> @benchmark Phi_lu \ y
BenchmarkTools.Trial:
  memory estimate:  756.50 KiB
  allocs estimate:  6
  --------------
  minimum time:     6.108 ms (0.00% GC)
  median time:      6.382 ms (0.00% GC)
  mean time:        6.528 ms (0.00% GC)
  maximum time:     8.466 ms (0.00% GC)
  --------------
  samples:          761
  evals/sample:     1

julia> @benchmark funfitxy(b, Phi_tensor, y)
BenchmarkTools.Trial:
  memory estimate:  2.90 MiB
  allocs estimate:  6579
  --------------
  minimum time:     1.610 ms (0.00% GC)
  median time:      1.772 ms (0.00% GC)
  mean time:        1.986 ms (8.00% GC)
  maximum time:     4.741 ms (35.18% GC)
  --------------
  samples:          2513
  evals/sample:     1
```
On the other hand, there are times when it is definitely faster to use the LU factorization (same example, using just the first two dimensions):
```julia
julia> b = Basis(SplineParams(25, -1, 1, 1), SplineParams(20, -1, 2, 3));

julia> Phi_tensor = BasisMatrix(b, Tensor, nodes(b)[2]);

julia> Phi = convert(Expanded, Phi_tensor);

julia> Phi_lu = lufact(Phi.vals[1]);

julia> y = rand(length(b));

julia> @benchmark Phi_lu \ y
BenchmarkTools.Trial:
  memory estimate:  34.69 KiB
  allocs estimate:  4
  --------------
  minimum time:     21.846 μs (0.00% GC)
  median time:      22.484 μs (0.00% GC)
  mean time:        23.865 μs (1.21% GC)
  maximum time:     977.398 μs (89.64% GC)
  --------------
  samples:          10000
  evals/sample:     1

julia> @benchmark funfitxy(b, Phi_tensor, y)
BenchmarkTools.Trial:
  memory estimate:  170.13 KiB
  allocs estimate:  346
  --------------
  minimum time:     91.221 μs (0.00% GC)
  median time:      99.814 μs (0.00% GC)
  mean time:        123.845 μs (8.72% GC)
  maximum time:     3.180 ms (58.89% GC)
  --------------
  samples:          10000
  evals/sample:     1
```
And once again, using just the first dimension:
```julia
julia> b = Basis(SplineParams(25, -1, 1, 1));

julia> Phi_tensor = BasisMatrix(b, Tensor, nodes(b)[2]);

julia> Phi = convert(Expanded, Phi_tensor);

julia> Phi_lu = lufact(Phi.vals[1]);

julia> y = rand(length(b));

julia> @benchmark Phi_lu \ y
BenchmarkTools.Trial:
  memory estimate:  1.86 KiB
  allocs estimate:  4
  --------------
  minimum time:     933.016 ns (0.00% GC)
  median time:      1.157 μs (0.00% GC)
  mean time:        1.267 μs (6.23% GC)
  maximum time:     30.825 μs (90.51% GC)
  --------------
  samples:          10000
  evals/sample:     62

julia> @benchmark funfitxy(b, Phi_tensor, y)
BenchmarkTools.Trial:
  memory estimate:  42.60 KiB
  allocs estimate:  89
  --------------
  minimum time:     20.986 μs (0.00% GC)
  median time:      23.636 μs (0.00% GC)
  mean time:        30.564 μs (9.06% GC)
  maximum time:     3.731 ms (46.25% GC)
  --------------
  samples:          10000
  evals/sample:     1
```
We should probably be more scientific about these ideas and run benchmarks to see when it becomes more efficient to use the `Tensor` form rather than precomputing the LU factorization. I would imagine the deciding factors are the size of the basis matrix and maybe also its sparsity pattern.
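The speedup from the `Tensor` form comes from Kronecker structure: when the multi-dimensional basis matrix factors as a Kronecker product, the fitting system can be solved dimension by dimension without ever forming the large expanded matrix. A minimal pure-Julia sketch of that identity, with toy matrices standing in for the actual `BasisMatrices` objects:

```julia
using LinearAlgebra

# Toy stand-ins for per-dimension basis matrices (not BasisMatrices objects).
A = [2.0 1.0; 0.0 3.0]
B = [1.0 0.5; 0.5 2.0]
y = rand(4)

# Expanded approach: form kron(A, B) and solve the full 4x4 system.
c_expanded = kron(A, B) \ y

# Tensor approach: use kron(A, B) * vec(X) == vec(B * X * A'), so the
# same system is two small 2x2 solves instead of one big one.
X = B \ reshape(y, 2, 2) / A'
c_tensor = vec(X)

maximum(abs.(c_expanded .- c_tensor))  # agrees up to rounding error
```

For d dimensions with n nodes each, the expanded solve works on an n^d × n^d matrix, while the dimension-by-dimension route only ever touches n × n factors, which is one plausible reason the crossover point in the benchmarks above moves with the size of the basis.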
Thanks! I have only used the `Expanded` form (which is "simplest conceptually"), and have yet to learn about the `Tensor` form.
`Phi` is used in `evaluate_policy!` to compute `Phi - discount * EV`, where `EV` is the matrix whose `(i, j)` entry is `E_e[phi_j(g(s_i, X[s_i], e))]` (`phi_j` is the j-th basis function and `s_i` is the i-th node).
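As a toy illustration of that `EV` construction -- the basis functions, transition rule, policy, and shock draws below are made-up stand-ins, not the actual `ContinuousDP` code:

```julia
# Hypothetical sketch: entry (i, j) of EV averages the j-th basis function
# evaluated at the next-period state g(s_i, X[s_i], e) over shock draws e.
phi(j, s) = s^(j - 1)          # monomial "basis functions" for illustration
g(s, x, e) = 0.9 * s + x + e   # toy transition rule
S = [0.0, 0.5, 1.0]            # nodes s_i
X = [0.1, 0.1, 0.1]            # policy values at the nodes
shocks = [-0.1, 0.0, 0.1]      # equally weighted shock draws

n = length(S)
EV = [sum(phi(j, g(S[i], X[i], e)) for e in shocks) / length(shocks)
      for i in 1:n, j in 1:n]
```

With `EV` built this way, `Phi - discount * EV` is the matrix applied to the coefficient vector in the policy-evaluation step.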
Useful pieces of information about interpolation are stored in `Interp`:

And `ContinuousDP` keeps an instance of `Interp` as one of its fields.

(1) Conceptually, this is not part of a problem, but part of a solution algorithm.
(2) One option is to include it in a "`Solver`" type, like
(3) `Interp` should not necessarily belong here. It seems to fit better in `BasisMatrices.jl`. Any thoughts @sglyon?