oveddan opened 7 years ago
The class slides use soft constraints to specify the vectors for "constrained faces," whereas we want you to use hard constraints for the homework. In other words, the class slides add a term to penalize the violation of the constraints (with weight lambda), whereas the homework asks you to minimize the smoothness energy (||Lu||^2 in the slides) over the space of vector fields that exactly satisfy the constraints.
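For concreteness, the two formulations might be written as follows (notation assumed, following the slides: L is the smoothness operator, u the unknown field, S a selector matrix picking out the constrained faces, and u_c the prescribed values there):

```latex
% Soft constraints (class slides): penalize constraint violation with weight lambda
\min_{u}\; \|Lu\|^2 + \lambda \,\|Su - u_c\|^2

% Hard constraints (homework): satisfy the prescribed values exactly
\min_{u}\; \|Lu\|^2 \quad \text{subject to} \quad Su = u_c
```

As lambda grows, the soft formulation approaches the hard one, but for any finite lambda the constraints are only approximately satisfied.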
The homework's Q matrix is the same as A*A from the slides if you were to drop the slides' penalty term (i.e., set lambda = 0, or impose no constraints).
You can implement hard constraints either with Lagrange multipliers or by explicitly fixing the constrained variables. The second approach removes the rows/columns of the Q matrix corresponding to the constrained faces and builds an appropriate right-hand-side vector. It is preferable because it removes variables instead of adding them and, most importantly, it yields a positive definite matrix that permits an efficient Cholesky factorization, rather than the indefinite matrix you'd get using Lagrange multipliers.
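As a sketch of the second approach (illustrative Python/NumPy on a small dense system; the homework's Q would be sparse, and the toy matrix, index sets, and prescribed values here are my own, not from the assignment):

```python
import numpy as np

# Toy SPD matrix standing in for the homework's Q (assumed example).
Q = np.array([[ 4., -1.,  0., -1.],
              [-1.,  4., -1.,  0.],
              [ 0., -1.,  4., -1.],
              [-1.,  0., -1.,  4.]])

con  = np.array([1, 3])       # indices of constrained faces
free = np.array([0, 2])       # remaining unknowns
u_c  = np.array([1.0, -2.0])  # prescribed values on the constrained faces

# Remove constrained rows/columns and move the known values to the
# right-hand side:  Q_ff u_f = -Q_fc u_c
Q_ff = Q[np.ix_(free, free)]
Q_fc = Q[np.ix_(free, con)]
u_f = np.linalg.solve(Q_ff, -Q_fc @ u_c)

# Reassemble the full solution; constrained entries are satisfied exactly.
u = np.empty(len(Q))
u[con] = u_c
u[free] = u_f
```

The reduced matrix Q_ff inherits positive definiteness from Q, which is what makes a Cholesky (LL^T) factorization applicable in the real solver.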
Does this mean we have to replace the SimplicialLDLT solver with something like SparseLU, which doesn't require an SPD matrix, if we use the Lagrange multiplier method? I have tried both and didn't notice any difference in the results, though.
You can compute an LDL^T decomposition for an indefinite matrix; I'm not sure why the Eigen::SimplicialLDLT documentation implies otherwise. However, in my experience with SuiteSparse/CHOLMOD (the high-performance factorization libraries also used by Matlab), this has little benefit over ordinary LU: it is still much slower and more memory-hungry (e.g. 5-10x) than applying LL^T to the reduced, unconstrained matrix you'd get with the "second approach."
I'd expect an actual Cholesky factorization, e.g. Eigen's SimplicialLLT, to fail on a Lagrange multiplier system. I'm not sure Eigen's built-in LL^T implementation will have noticeably better performance than its LDL^T, but the CHOLMOD version should.
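To see why the Lagrange-multiplier route loses positive definiteness, here is a small NumPy illustration (toy matrices of my own, not the homework's): the KKT system [[Q, C^T], [C, 0]] has a zero diagonal block, so it has eigenvalues of both signs even when Q itself is SPD, and a plain LL^T factorization of it will fail.

```python
import numpy as np

# Toy SPD smoothness matrix and one constraint row fixing the second
# variable (assumed example).
Q = np.array([[ 4., -1.],
              [-1.,  4.]])
C = np.array([[0., 1.]])  # enforces u[1] = prescribed value

# KKT / Lagrange-multiplier system: [[Q, C^T], [C, 0]]
K = np.block([[Q, C.T],
              [C, np.zeros((1, 1))]])

eigs = np.linalg.eigvalsh(K)
# Q alone is positive definite, but K has both positive and negative
# eigenvalues, i.e. it is indefinite.
```

This is the structural reason the reduced-system approach pairs with SimplicialLLT/CHOLMOD, while the multiplier system needs an indefinite-capable factorization such as LDL^T or LU.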
I remember the professor mentioning in class that how we build the matrix for the homework would be slightly different from how it was defined in the class slides for directional fields.
Is there no concept of lambda for the homework, and are all values of b zero?