Open · allenZhangPersonal opened 2 years ago
Good evening, I have a question about the hard constraints.

I wanted to approach the problem in a lazy fashion, so that I would not need to remove the constrained x values, i.e. the "known" variables, from my LHS. My approach was that whenever we encounter a constrained face, we do not add its e_f or e_g term to the Q matrix; instead, we move e_f multiplied by its constrained vector to the RHS.

Then, after assembling the Q matrix, I appended extra rows so that each "known" variable has a single coefficient of 1 and an RHS entry equal to its vector value in complex form.

However, when the solved x values are converted back to vector form, they deviate from the values provided by the constraints; I think converting the constraint values to complex form and back introduces numerical rounding error. Is this an acceptable approach, or should I reorder the x values so that the free variables to be solved sit in the upper block of the column vector while the known variables are excluded from it entirely?

The approach you describe is fine and, as far as I can tell from your description, equivalent to what is described in the slides. It is just done directly as you assemble the matrix rather than as a postprocessing step.

It is fine if the error in the hard constraints is due to the conversion; it will be very small, probably on the order of 1e-12.
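For concreteness, here is a minimal sketch of the row-augmentation approach from the question, assuming a complex least-squares setup in NumPy/SciPy. Everything in it is toy data: the face pairs, the stand-in e_f/e_g coefficients, and the constrained face are invented for illustration, not taken from the actual assignment.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 5  # one complex unknown per face (toy size)
rng = np.random.default_rng(0)

# Toy stand-ins for the smoothness terms: one row per adjacent face
# pair (f, g), with random complex coefficients playing the role of
# the local edge bases e_f and e_g.
pairs = [(0, 1), (1, 2), (2, 3), (3, 4)]
coeffs = rng.standard_normal((len(pairs), 2)) \
       + 1j * rng.standard_normal((len(pairs), 2))

constrained = {2: 1.0 + 0.5j}  # face index -> prescribed value (hypothetical)

rows, cols, vals, rhs = [], [], [], []
r = 0
for (f, g), (ef, eg) in zip(pairs, coeffs):
    acc = 0.0 + 0.0j
    for face, coef in ((f, ef), (g, -eg)):
        if face in constrained:
            acc -= coef * constrained[face]  # known: move its term to the RHS
        else:
            rows.append(r); cols.append(face); vals.append(coef)
    rhs.append(acc)
    r += 1

# Extra rows pinning each "known" variable: a single 1 against its value.
for face, value in constrained.items():
    rows.append(r); cols.append(face); vals.append(1.0 + 0.0j)
    rhs.append(value)
    r += 1

A = sp.csr_matrix((vals, (rows, cols)), shape=(r, n), dtype=complex)
b = np.array(rhs, dtype=complex)

# Least squares via the normal equations; A is complex, so use the
# conjugate transpose: (A^H A) x = A^H b.
x = spla.spsolve((A.conj().T @ A).tocsc(), A.conj().T @ b)
print(abs(x[2] - constrained[2]))  # tiny: the pinned value is recovered
```

Because the constrained column appears nowhere except its unit row (all of its smoothness terms were moved to the RHS), the normal equations determine it exactly, so any deviation you see should be solver and conversion noise rather than a flaw in the formulation.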
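And a quick way to test the round-trip suspicion in isolation, again with an invented local frame: convert a tangent vector to its complex coordinate and back, and measure the error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical local frame: orthonormalize two random 3D vectors to get
# a tangent basis (b1, b2), standing in for a face's local frame.
q, _ = np.linalg.qr(rng.standard_normal((3, 2)))
b1, b2 = q[:, 0], q[:, 1]

v = 0.3 * b1 + 0.7 * b2              # a tangent vector in that frame
c = complex(v @ b1, v @ b2)          # vector -> complex coordinate
v_back = c.real * b1 + c.imag * b2   # complex -> vector

print(np.linalg.norm(v - v_back))    # ~1e-16, i.e. machine precision
```

The round trip alone sits at machine precision; the ~1e-12 figure mentioned above is plausible once the linear solve is included. Either way, eliminating the knowns by reordering should agree with the augmented system to roughly the same precision, since the two formulations are equivalent.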