Add an `ad.constraint` submodule:

- `ad.constraint.linear`: general linear inequality constraint `Ax <= b`. `A` should be taken as some abstract matrix type (should be fine to use `MatrixNaive`... since we should only require `cmul` and `ctmul`). `b` is obviously just a dense vector.
- `.update()` function: it should take in `c, d, alpha, Q, D, v` (the first three scalars, `Q` a small square dense matrix, `D` a diagonal vector, `v` a vector). It just solves the prox operator using the `nnqp` solver and returns the block update (see the sketch after this list).
- The `abs_grad` definition needs to change (invariance step) to include the dual component. Then the KKT + screening steps should remain the same.
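For intuition on the prox step, here is a hedged sketch of how such a prox reduces to a non-negative QP (a generic derivation for Euclidean projection onto `{x : Ax <= b}`, simplified from the actual `c, d, alpha, Q, D, v` parameterization; `project_polyhedron` is a hypothetical helper, and the projected-gradient loop merely stands in for the `nnqp` solver):

```python
import numpy as np

def project_polyhedron(u, A, b, iters=1000):
    """Euclidean projection of u onto {x : A x <= b} via its dual NNQP.

    Dual problem: minimize_{mu >= 0} 0.5 * ||A^T mu||^2 - mu^T (A u - b),
    with the primal recovered as x = u - A^T mu. Solved here by
    projected gradient with step 1/L, L = ||A A^T||_2.
    """
    mu = np.zeros(A.shape[0])
    step = 1.0 / np.linalg.norm(A @ A.T, 2)
    r = A @ u - b  # linear term of the dual objective
    for _ in range(iters):
        grad = A @ (A.T @ mu) - r
        mu = np.maximum(mu - step * grad, 0.0)  # non-negativity projection
    return u - A.T @ mu, mu
```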
TODO (ordered):

1. Write `state_gaussian_pin_constraint` and `solve_gaussian_pin_constraint`. These should generalize the control flow of the unconstrained version so that only the `coordinate_descent` function is swapped (see the sketch after this list). The code should assume a constraint class exists, and the API should fall out nicely.
2. Add `constraints` as an argument to `grpnet` and the `state` functions, and change the `_solve` function that delegates to the C++ solver functions to properly include the case of constraints.
3. Expose `bcd.constrained` for unit-testing purposes.
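A minimal sketch of the control-flow reuse in step 1 (all names and the `state` protocol here are hypothetical, for illustration only):

```python
def solve_pin(state, coordinate_descent, max_iters=10_000, tol=1e-12):
    # Shared outer loop: the unconstrained and constrained pin solvers
    # differ only in the coordinate-descent kernel passed in.
    for _ in range(max_iters):
        max_update = coordinate_descent(state)  # returns a convergence measure
        if max_update < tol:
            break
    return state

# solve_gaussian_pin_constraint(state) would then be roughly
# solve_pin(state, coordinate_descent_constrained).
```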
Need better intuition for how the iterates behave with the proximal Newton method. An issue seems to occur for `ad.constraint.lower` when `b` is positive and small (close to 0): the dual seems to move too much and enters the regime where `||v - A^T mu||_2 <= lmda`. We want to confirm this behavior. Try to find a minimal problem with dimension 2 for ease of visualization:
```python
import numpy as np
import adelie as ad

d = 2
eps = 1e-5
seed = 6

np.random.seed(seed)
quad = np.random.uniform(0, 1, d)
linear = np.sqrt(quad) * np.random.normal(0, 1, d)
l1 = 0.5 * np.linalg.norm(linear)
l2 = 0
Q = np.random.normal(0, 1, (d, d))
Q, _, _ = np.linalg.svd(Q)
Q = np.asfortranarray(Q)
b = np.full(d, eps)
constr = ad.constraint.lower(
    b,
    dtype=np.float64,
)
```
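To check for the degenerate regime once the solve has run, a hedged follow-up sketch (assuming the dual `mu` can be read off the constraint or solver state, and that the lower constraint is encoded as `-x <= -b`, i.e. `A = -I`, so `v - A^T mu = v + mu`):

```python
def in_degenerate_regime(v, mu, lmda):
    # With A = -I for the lower constraint, the condition
    # ||v - A^T mu||_2 <= lmda reduces to ||v + mu||_2 <= lmda.
    return np.linalg.norm(v + mu) <= lmda
```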
More TODOs:

- Generalize `ConstraintLowerUpper` to `ConstraintOneSided` and make `_sgn` a vector of +/-1s. The rest of the implementation should mostly be the same? (See the sketch after this list.)
- Try `ConstraintOneSided` with ADMM. Is it sensitive to `rho` even with warm-starts? YES, very!
- Implement `ConstraintBox` following the template for `ConstraintOneSided`.
- Implement `ConstraintLinear`. Not sure what the right algorithm is here.
- Give constraints a `.solve()` method rather than a functional interface; functional doesn't make much sense in this case IMO. Everything should go through `solve()`.
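A minimal sketch of the sign-vector idea for `ConstraintOneSided` (a hypothetical helper, not the library's API): encoding lower and upper bounds uniformly as `sgn * x <= b` with `sgn` a vector of +/-1 makes the projection a single elementwise formula.

```python
import numpy as np

def project_one_sided(x, sgn, b):
    # Project x onto {x : sgn * x <= b}, with sgn in {-1, +1}^d.
    # z = sgn * x is an orthogonal change of variables, so projecting
    # z elementwise onto {z <= b} and mapping back is exact.
    z = sgn * x
    return sgn * np.minimum(z, b)
```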
Optimizations:

- `ConstraintLinear` can use a sparse format. This will save a lot of space!
- `clear()` to clear any cached information. The solver should do this in the beginning (do it in `solve_core`).
- `solve(...)`: only return the primal. The dual is saved internally.
- `dual_nnz()`: returns the number of non-zero entries of the current dual variable.
- `dual(indices, values)`: populates the current dual variable in sparse format.
- `gradient(x, dual_indices, dual_values, out)`: computes the gradient where the primal and dual are given by the user. This is useful when computing the full gradient in diagnostics.
- `gradient(x, out)`: computes the gradient where the primal is given by the user and the dual is assumed to be the currently saved value. This is useful in the solver when the gradient is computed on the screen set first and then on the non-screen set.
- `clear()`: clears internal structure to the same state as at initial construction. The solver should call this at the very beginning.
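A hedged Python sketch of what this constraint interface could look like (an illustration of the method list above, not the library's actual base class; the full `solve(...)` argument list is elided in the issue, so it is left open here):

```python
from abc import ABC, abstractmethod

class ConstraintBase(ABC):
    """Illustrative constraint interface following the list above."""

    @abstractmethod
    def solve(self, x, *args):
        """Solve the block prox problem, updating the primal x.
        Only the primal is returned; the dual is cached internally."""

    @abstractmethod
    def dual_nnz(self):
        """Number of non-zero entries of the cached dual variable."""

    @abstractmethod
    def dual(self, indices, values):
        """Populate (indices, values) with the cached dual in sparse format."""

    @abstractmethod
    def gradient(self, x, out, dual_indices=None, dual_values=None):
        """Compute the gradient at x into out. If a sparse dual is passed
        explicitly (diagnostics), use it; otherwise use the cached dual
        (solver path). In C++ these would be two overloads rather than
        optional arguments."""

    @abstractmethod
    def clear(self):
        """Reset internal state to initial construction; the solver calls
        this once at the very beginning."""
```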
At the very least, it'd be nice to support constraints of the form `Ax = b` and `Ax <= b`.
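One standard reduction worth noting (a generic observation, not necessarily the plan here): `Ax = b` can reuse the inequality machinery by stacking both directions, at the cost of doubling the row count.

```python
import numpy as np

def equality_as_inequalities(A, b):
    # Encode Ax = b as the stacked inequalities [A; -A] x <= [b; -b].
    return np.vstack([A, -A]), np.concatenate([b, -b])
```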