project-asgard / DG-SparseGrid

Matlab implementation of ASGarD

Matrix exponential - optimization #69

Open bmcdanie opened 3 years ago

bmcdanie commented 3 years ago

From @efdazedo :

I think computing the matrix exponential may require a full eigendecomposition of the matrix.

expm(A) can be computed as

V * diag( exp(lambda(i)) ) * inv(V), where A = V * diag( lambda(i) ) * inv(V) is the eigendecomposition of matrix A.

Thus, if the matrix is the same, or the eigenvectors are the same and the eigenvalues change in a known manner, then one may consider precomputing and reusing the already-computed value of expm(A*dt) or the eigendecomposition (in line 211).
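
As a minimal sketch of that reuse (illustrative only, not code from the repo; it assumes A is diagonalizable and small enough to form densely):

% Precompute once, while A is unchanged across time steps
[V, D] = eig(full(A));              % A = V*D*inv(V)
lambda = diag(D);

% Each step: expm(dt*A)*f0 without forming expm at all
f1 = V * ( exp(dt*lambda) .* (V \ f0) );

If only dt changes between steps, only the exp(dt*lambda) factor has to be re-evaluated.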

dlg0 commented 3 years ago

The matrix exponential has the advantage that you only need to take one time step. Its main cost comes in when we have a non-zero source term or boundary condition. In the fixing-fp2-complete branch the following code can be seen in time_advance.m:

%% Matrix Exponential
function f1 = matrix_exponential(pde,opts,A_data,f0,t,dt,deg,hash_table,Vmax,Emax)

applyLHS = ~isempty(pde.termsLHS);

if applyLHS
    [~,A,ALHS] = apply_A(pde,opts,A_data,f0,deg);
    f1 = expm(dt*(ALHS\A))*f0;
else
    [~,A,ALHS] = apply_A(pde,opts,A_data,f0,deg);
    f1 = expm(A*dt)*f0;
end

    function ret = rhs_integrand(u)
        s = source_vector(pde,opts,hash_table,u);
        bc = boundary_condition_vector(pde,opts,hash_table,u);
        uu = t+dt-u;
        if applyLHS
            ret = ALHS\(expm(uu*(ALHS\A)) * (s+bc));
        else
            ret = expm(uu*A) * (s+bc);
        end
    end

rhs_integral = integral(@rhs_integrand,t,t+dt,'ArrayValued',true);%,'RelTol',1e-5,'AbsTol',1e-5);
f1 = f1 + rhs_integral;

where the main cost is in the call to integral, and the argument to expm() changes on every call.
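
For reference, in the case without an LHS term the code above amounts to the variation-of-constants (Duhamel) formula for df/dt = A*f + s(t) + bc(t):

f(t+dt) = expm(dt*A)*f(t) + integral_{u = t}^{t+dt} expm( (t+dt-u)*A ) * ( s(u) + bc(u) ) du

Each quadrature point u picked by integral gives a different uu = t+dt-u, which is why the expm argument changes on every call; the LHS case is analogous with A replaced by ALHS\A.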

Another thing to consider is that we only really include the matrix exponential to check the other time-stepping methods, and it's OK for it to be expensive, so I likely wouldn't spend a lot of time trying to make it faster unless there is a simple approach to doing so.

efdazedo commented 3 years ago

If matrix A is the same (unchanged) and "uu" is a scalar, then the eigendecomposition can be computed as [V,D] = eig(A), giving A = V*D*inv(V), where D is a diagonal matrix.

Then expm(uu*A) may be computed as V * diag( exp( uu*diag(D) ), 0 ) * inv(V).

If we want the action of expm(uu*A) on a vector "w" (say w=(s+bc)), then we can compute it as

% precompute:
[V,D] = eig(A); lambda = diag(D); inv_V = inv(V);   % note inv_V may be V' for a symmetric matrix

then expm(uu*A)*w can be computed as

V * ( exp(uu*lambda) .* (inv_V*w) )

in O(N^2) work.

Perhaps something similar if (ALHS\A) is unchanged as well?
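
A minimal sketch of that idea against the time_advance.m snippet above (illustrative only; apply_expm is a made-up name, and it assumes the matrix M, i.e. A or ALHS\A, is unchanged over the step, diagonalizable, and cheap enough to form densely):

% Precompute once while M is unchanged (M = A, or M = ALHS\A when applyLHS is set)
[V, D] = eig(full(M));
lambda = diag(D);
inv_V  = inv(V);                     % V' if M is symmetric

% O(N^2) action of expm(uu*M) on a vector w, with no call to expm
apply_expm = @(uu, w) V * ( exp(uu*lambda) .* (inv_V * w) );

Inside rhs_integrand, expm(uu*A)*(s+bc) would then become apply_expm(uu, s+bc) (wrapped in ALHS\ ... for the LHS branch), so only exp(uu*lambda) is re-evaluated per quadrature point. One caveat: for strongly non-normal A the eig-based route can be less accurate than expm's scaling-and-squaring, so it would be worth checking against expm on a small case first.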