MPF-Optimization-Laboratory / ActiveSetPursuit.jl

Sparse least-squares solver
MIT License

Adding and dropping columns of the active-set matrix `S` is inefficient #12

Open mpf opened 3 months ago

mpf commented 3 months ago

As discussed, adding and dropping columns from the active-set matrix S is especially inefficient. Because S is only used through products with vectors, it seems plausible that we could replace the explicit matrix S with an implicit linear map.

Roughly, if active is the vector of indices pointing to active variables, the linear map would compute the product S*x via the method

    # product S*x, where S = A[:, active] is kept implicit
    function Sprod(A, x, active)
        xx = zeros(eltype(x), size(A, 2))  # scatter x into a full-length vector
        xx[active] .= x
        return A * xx
    end

(and a corresponding method for the product S'*y).
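A matching sketch for the adjoint product, following the same convention as `Sprod` above (the name `Stprod` is assumed here, not part of the package):

```julia
# Product S'*y where S = A[:, active] is kept implicit: form the full
# product A'*y and then gather the entries indexed by `active`.
function Stprod(A, y, active)
    return (A' * y)[active]
end
```

As with `Sprod`, this trades one full product with `A'` for avoiding the explicit column copy `A[:, active]`.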

So instead of copying and deleting individual columns of A into S, we would just keep track of the active indices and use the linear map Sprod to compute the products. The tradeoff is additional products with A. (Currently, there is only one product with A at each iteration, used to pull a column from A into S, i.e., this line.)
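Under this scheme the add/drop operations themselves reduce to index bookkeeping on `active`; a minimal sketch:

```julia
active = Int[]          # indices of the active columns of A

# "add column j of A to S": record the index (no column copy)
j = 7                   # example index
push!(active, j)

# "drop column j from S": delete the index (no matrix reallocation)
deleteat!(active, findfirst(==(j), active))
```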

To do:

cortner commented 3 months ago

Am I understanding correctly that if A is m x n then S will be m x #active?

If this is the case, then for m >> n one could preprocess by replacing ||A x - b||^2 with ||R x - Q^T b||^2. This has a large initial cost (full QR factorization) but in practice this is often negligible compared to everything else we are doing.
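The proposed preprocessing, as a sketch (assuming a dense m-by-n `A` with m >> n):

```julia
using LinearAlgebra

# One-time QR factorization; afterwards the n x n triangular problem
# min ||Rn*x - bn|| has the same minimizer as min ||A*x - b||.
function qr_preprocess(A, b)
    n = size(A, 2)
    F = qr(A)                 # large initial cost, paid once
    Rn = F.R                  # n x n upper triangular
    bn = (F.Q' * b)[1:n]      # project b onto range(Q)
    return Rn, bn
end
```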

mpf commented 3 months ago

Am I understanding correctly that if A is m x n then S will be m x #active?

Indeed.

If this is the case, then for m >> n one could preprocess by replacing ||A x - b||^2 with ||R x - Q^T b||^2. This has a large initial cost (full QR factorization) but in practice this is often negligible compared to everything else we are doing.

But ASP never solves the full m-by-n least-squares problem ||Ax-b||^2. Instead, it solves the thin m-by-k problem defined by the current active set, ||Sx_s-b||^2, via Q-less QR, i.e., ||Sx_s-b|| = ||Rx_s - Q^Tb|| where S'S=R'R (using the corrected semi-normal equations, which should be accurate for S with condition number up to 1e12).
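For concreteness, the corrected semi-normal equations amount to one step of iterative refinement on the semi-normal solve; a sketch (the function name `csne` is illustrative, not the package's):

```julia
using LinearAlgebra

# Solve min ||S*x - b|| given only the triangular factor R with R'R = S'S,
# via the corrected semi-normal equations: solve once, then apply one
# refinement step using the residual.
function csne(R, S, b)
    x  = R \ (R' \ (S' * b))   # semi-normal solve: R'R x = S'b
    r  = b - S * x             # residual
    dx = R \ (R' \ (S' * r))   # one correction step
    return x + dx
end
```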

cortner commented 3 months ago

I am missing something. According to your first paragraph, S is m x #active, but in your second paragraph S is #active x #active? (with k = #active?)

mpf commented 3 months ago

There's a typo in my earlier response -- sorry, that probably led to the confusion.

At each iteration that is not stationary with respect to the current active set of $k$ variables, ASP pulls a new column $a$ from the full matrix $A$, adds it to the active set, and updates a Q-less QR factorization:

  1. identify new column $a$
  2. update the current $k\times k$ triangular factor $R$ to the $(k+1)\times(k+1)$ triangular $\bar R$, and update the $m\times(k+1)$ active-set matrix $\bar S$ so that

$$\bar R^T\bar R = \bar S^T\bar S, \quad \bar S:=\begin{bmatrix} S & a \end{bmatrix}.$$

  3. solve the $m\times (k+1)$ least-squares problem $$||\bar Sx_{\bar S}-b||_2$$ via the seminormal equations $$\bar R^T\bar R x_{\bar S} = \bar S^Tb$$

Steps 2-3 are carried out in a numerically stable way by QRupdate.jl. Note that QRupdate doesn't actually need the updated matrix $\bar S$. In particular, it only requires as input $R$, $S$, $a$, and $b$. My main point is that $S$ doesn't need to be an explicit matrix because QRupdate only requires the products $Sv$ and $S^Tw$ for vectors $v$ and $w$, and so $S$ can in principle be a function that computes these products.
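Step 2 can be sketched directly from the relation $\bar R^T\bar R = \bar S^T\bar S$: the new column of $\bar R$ solves $R^T r = S^T a$, and the new diagonal entry captures the part of $a$ outside the range of $S$. QRupdate.jl implements the numerically careful version; the code below is a bare mathematical sketch, and `addcol` is a made-up name:

```julia
using LinearAlgebra

# Grow the triangular factor R (with R'R = S'S) after appending column a
# to S. Assumes [S a] has full column rank so the sqrt argument is positive.
function addcol(R, S, a)
    r = R' \ (S' * a)                 # solve R' r = S' a
    ρ = sqrt(dot(a, a) - dot(r, r))   # component of a orthogonal to range(S)
    k = size(R, 1)
    return UpperTriangular([Matrix(R) r; zeros(1, k) ρ])
end
```

Note that `S` appears only through the product `S' * a`, which is exactly why an implicit (functional) `S` suffices here.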

mpf commented 3 months ago

Good news: QRupdate now has in-place versions of its add- and delete-column routines! Huge savings, particularly in allocations.

The above discussion of potential savings from maintaining $S$ implicitly still holds.

cortner commented 1 month ago

@tinatorabi -- could you please post a MWE here that reproduces this performance bottleneck? Probably a random problem of sufficient size will be enough.