Lambda vector dimension expansion

Note: the lambda vector's dimension corresponds to the number of equality constraints in the ADMM. Before today's Skype meeting, we only had (K D = F) as equality constraints, and thus lambda_ has 6 * Ns_ entries.
void ADMMCut::SetStartingPoint()
{
	Ns_ = ptr_dualgraph_->SizeOfFreeFace();	// the number of free vertices in the original graph
	/* --- */
	lambda_.resize(6 * Ns_);
}
But after the meeting on May 7th, the equality constraints involving y come into play:
y_ij = x_i - x_j (there are 2 * Md of the y_ij in total).
Thus the dimension of lambda is now 2 * Md + 6 * Ns.
Modification of CalculateX and CalculateD in this new version.
Some Clarification for ADMM computation
Note:
The ADMM process alternately minimizes one and the same objective function (in CalculateX and CalculateD).
In the previous version, that function is:
x^t * (A^t C A) * x + lambda^t * (K(x)d - F(x)) + (mu/2) <K(x)d - F(x), K(x)d - F(x)>
This function is obtained by moving the equality constraints into the objective function; the original objective function is:
sum_ij w_ij |x_i - x_j|
s.t. K(x) D = F(x)
...
But in current version:
the function turns to:
sum_ij w_ij (x_i - x_j)+
s.t. K(x) D = F(x)
...
(We use an iterative reweighting scheme to translate this discrete problem into a continuous quadratic problem, then use ADMM to solve the continuous problem; that is how the function
x^t * (A^t C A) * x + lambda^t * (K(x)d - F(x)) + (mu/2) <K(x)d - F(x), K(x)d - F(x)>
above was obtained.)
To solve this new problem, we introduce the variable y, and the objective function turns into:
min_{x,y,d} sum_ij w_ij (y_ij)+
s.t. y_ij = x_i - x_j
K(x) d = F(x)
....
We still use the reweighting scheme to solve this discrete problem, formulating it as:
min_{x,y,d} sum_ij (1/r_ij^(iter)) w_ij^2 * ((y_ij)+)^2
s.t. ....
and then the augmented Lagrangian function is:
sum_ij (1/r_ij^(iter)) w_ij^2 * ((y_ij)+)^2
...+ lambda_{1,...,6Ns}^t * (K(x)D - F(x))
...+ sum_{k = 6Ns+1,...,6Ns + 2Md} lambda_k * (y_ij - (x_i - x_j))
...+ (mu/2) * <K(x)D - F(x), K(x)D - F(x)>
...+ (mu/2) * sum_ij (y_ij - (x_i - x_j))^2
This formula is clearly (x, D) - Y separable.
CalculateX
In the previous version, CalculateX minimizes:
x^t * (A^t C A) * x + lambda^t * (K(x)d - F(x)) + (mu/2) <K(x)d - F(x), K(x)d - F(x)>
(y-related terms are ignored because they are constant in this step).
After calculation, we can formulate K(x) d - F(x) = Q(d) * x (treating d as a constant vector during the CalculateX step), and thus we get:
x^t * (A^t C A) * x + (mu/2) * x^t * (Q^t Q) * x + lambda^t * Q * x
void ADMMCut::CalculateX()
{
	// Construct the Hessian matrix for the x-QP problem
	SpMat Q;
	CalculateQ(D_, Q);

	SpMat H2 = Q.transpose() * Q;
	SpMat H = 2 * L_ + penalty_ * H2;

	// Construct the linear coefficient for the x-QP problem
	a_ = Q.transpose() * lambda_;

	ptr_qp_->solve(H, a_, A, b, W_, d_, lb, ub, x_, NULL, NULL, debug_);
}
In the new version, CalculateX minimizes:
(mu/2) * x^t * (Q^t Q) * x + lambda^t * Q * x
(without the A^t C A weighting matrix, since the smoothness term now lives on y); in the code this amounts to dropping the 2 * L_ term from H.
CalculateD
Nothing should be changed.
CalculateY
The dual residual and primal residual for the ADMM termination criterion should be recalculated.