mfherbst / ReducedBasis.jl

Reduced basis methods for parametrised eigenproblems
https://mfherbst.github.io/ReducedBasis.jl/stable/
MIT License

Make `compress` less memory-intensive #45

Closed pbrehmer closed 1 year ago

pbrehmer commented 1 year ago

In its current implementation, the compression of `AffineDecomposition`s can suffer from a sharp increase in memory usage since in

function compress(op, snapshots::AbstractVector)
    overlap_matrix(snapshots, map(Ψ -> op * Ψ, snapshots))
end

the operator is applied to all snapshots at once. Since the snapshots are the largest objects in memory, materializing `op * Ψ` for every snapshot roughly doubles the memory usage in this step.

We should improve this by applying `op` to one element of `snapshots` at a time to cut down on memory usage. A naive implementation would be something like:

function compress(op, snapshots::AbstractVector)
    M = length(snapshots)
    matrixel = Matrix{Number}(undef, M, M)
    for j in 1:M
        applied = op * snapshots[j]  # apply op to one snapshot at a time
        for i in 1:M
            matrixel[i, j] = dot(snapshots[i], applied)
        end
    end
    promote_type.(matrixel)  # intended to narrow the abstract Matrix{Number} eltype; see below
end

Again, what would be the best way to avoid the `promote_type.(matrixel)` business, i.e. inferring the element type before computing the entries of `matrixel`?
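
One way to sidestep the type question entirely (a sketch only, not necessarily what ended up in the library) is to build each column with a comprehension, so the element type is inferred automatically while `op` is still applied to only one snapshot at a time:

using LinearAlgebra: dot

function compress(op, snapshots::AbstractVector)
    columns = map(snapshots) do Ψj
        applied = op * Ψj  # only one applied snapshot is alive at a time
        [dot(Ψi, applied) for Ψi in snapshots]  # eltype inferred by the comprehension
    end
    reduce(hcat, columns)  # column j holds the overlaps dot(Ψi, op * Ψj)
end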

mfherbst commented 1 year ago

Why not have `overlap_matrix` take a function which is applied to each ket in the inner loop before computing the overlap, one at a time (a bit like what `sum` does if you pass a function as the first argument)? Sounds to me like a useful primitive to have.
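
A minimal sketch of what such a primitive could look like (the name, argument order and exact signature are assumptions here, not necessarily what was merged):

using LinearAlgebra: dot

function overlap_matrix(f, bras::AbstractVector, kets::AbstractVector)
    M, N = length(bras), length(kets)
    # Infer the element type from the first overlap instead of allocating a Matrix{Number}.
    applied = f(first(kets))
    T = typeof(dot(first(bras), applied))
    overlaps = Matrix{T}(undef, M, N)
    for j in 1:N
        applied = f(kets[j])  # only one transformed ket is held in memory at a time
        for i in 1:M
            overlaps[i, j] = dot(bras[i], applied)
        end
    end
    overlaps
end

compress(op, snapshots::AbstractVector) = overlap_matrix(Ψ -> op * Ψ, snapshots, snapshots)

Passing the function first mirrors the `sum(f, itr)` convention; the first ket is transformed twice here only to infer the element type up front.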

pbrehmer commented 1 year ago

That sounds like a good idea! I'll open a PR in the next few days.

pbrehmer commented 1 year ago

This was addressed and implemented in PR #46, so I will close the issue.