matteoacrossi opened this issue 3 years ago
That particular bit of code changed (albeit only slightly) in #6896, but I just tested it, and it produces the same effect on main.
Changing the tolerances is pretty difficult to implement correctly, because the floating-point operations being considered in the two cases are drastically different, and unfortunately the matrix being unitary isn't even an absolute guarantee that this code will succeed. I'm not sure what's meant by "the closest unitary", though - the concept of distance between matrices is pretty hard to nail down.
In this case, the matrix is actually relatively far from unitary; I find:
In [1]: import numpy as np
...: np.set_printoptions(linewidth=np.inf)
In [2]: U = np.array([[ 7.06170298e-01+0.00000000e+00j, 1.32212929e-01+6.29474509e-02j,
...: 6.92734343e-01+0.00000000e+00j, -2.82359525e-15-4.90767745e-15j],
...: [ 4.07469350e-02+0.00000000e+00j, 5.16761273e-01-5.62190168e-01j,
...: -8.90793989e-02+1.54254824e-01j, 6.19281841e-01+0.00000000e+00j],
...: [-7.06868520e-01+0.00000000e+00j, 1.61702735e-01+3.06787860e-02j,
...: 6.86928856e-01+8.83837054e-03j, 3.60355709e-02-1.91994210e-05j],
...: [ 3.04662537e-04+0.00000000e+00j, -3.89640632e-01+4.65347802e-01j,
...: 3.17696502e-02-1.24220607e-01j, 7.83075412e-01-4.45458978e-02j]])
...: U @ U.conj().T
Out[2]:
array([[ 1.00000000e+00-7.35425978e-19j, -1.39835678e-10-5.32752620e-10j, -6.92500259e-11+7.36638354e-12j, 9.68643395e-11+1.19605201e-10j],
[-1.39835678e-10+5.32752620e-10j, 1.00000000e+00+8.53060339e-19j, 9.38072536e-11+1.07378981e-10j, -3.52907044e-10+2.10605527e-10j],
[-6.92500259e-11-7.36638268e-12j, 9.38072536e-11-1.07378981e-10j, 1.00000000e+00-4.19915915e-19j, -8.63739358e-11-2.13175519e-10j],
[ 9.68643395e-11-1.19605201e-10j, -3.52907044e-10-2.10605544e-10j, -8.63739358e-11+2.13175519e-10j, 1.00000000e+00-1.52318951e-17j]])
In [3]: np.abs(_)
Out[3]:
array([[1.00000000e+00, 5.50798848e-10, 6.96407186e-11, 1.53909403e-10],
[5.50798848e-10, 1.00000000e+00, 1.42583471e-10, 4.10972103e-10],
[6.96407185e-11, 1.42583471e-10, 1.00000000e+00, 2.30009258e-10],
[1.53909403e-10, 4.10972112e-10, 2.30009258e-10, 1.00000000e+00]])
While it's not a perfect fix, as a temporary measure, are you able to use a more precise form of U? Right now, the magnitudes of the errors make it seem like the immediate problem may be coming from only having the elements of U specified to 9 significant figures.
The error message should be more descriptive, though. Also see #4159.
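To reproduce this reliably, it might help to capture the offending matrix at full precision rather than from a printout; a quick sketch (the file name is just an example):
import numpy as np
np.set_printoptions(precision=17)  # 17 significant digits round-trip a float64
print(repr(U))
np.save('problem_unitary.npy', U)  # or save the exact bits to attach to the issue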
One way to get a nearby unitary might be something like:
import scipy.linalg
T, Z = scipy.linalg.schur(U, output='complex')
U = Z @ np.diag(np.diag(T) / abs(np.diag(T))) @ Z.T.conj()
For the given example U, the update is of the order of 3e-10:
In [136]: np.abs(U - Z @ np.diag(np.diag(T)/abs(np.diag(T))) @ Z.T.conj())
Out[136]:
array([[3.98988636e-11, 1.07464975e-10, 1.12342394e-10, 1.24830829e-10],
[3.07231030e-10, 1.04934953e-10, 3.28459841e-10, 3.40176682e-10],
[7.75431909e-11, 9.01091112e-11, 1.74218352e-10, 2.08428305e-10],
[1.80731083e-10, 1.70747937e-10, 1.62545168e-10, 2.41630737e-10]])
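As a quick sanity check (a small snippet using the T and Z from above, stored in a new name so U isn't clobbered), the projected matrix is unitary to machine precision:
W = Z @ np.diag(np.diag(T) / abs(np.diag(T))) @ Z.T.conj()
print(np.max(np.abs(W @ W.conj().T - np.eye(4))))  # expected to be ~1e-16, i.e. machine precision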
I think (but I'm not schur) that changing the exception to a warning and normalising D = D/abs(D) in a similar way after line 177 (https://github.com/Qiskit/qiskit-terra/blob/06795bfa88f5bcb2c89d9d7a7621daa115a5cb18/qiskit/quantum_info/synthesis/two_qubit_decompose.py#L177) would similarly give the Weyl decomposition of a nearby unitary.
If so, the loop termination condition would need some thought, or we could dust off your slightly slower randomisation-free diagonalisation routine, @jakelishman.
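A minimal sketch of that renormalisation, assuming D holds the eigenvalues coming out of the diagonalisation of M2 at that point in the code (the helper name is hypothetical):
import numpy as np

def renormalize_eigenvalues(D, tol=1e-9):
    # Hypothetical helper: the eigenvalues of a genuinely unitary M2 lie on the
    # unit circle, so push nearly-unimodular eigenvalues back onto it; bail out
    # if any is near zero, since then the input was nowhere near unitary.
    mags = np.abs(D)
    if np.any(mags < tol):
        raise ValueError("eigenvalue too close to zero to renormalise safely")
    return D / mags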
The way I constructed the closest unitary is this:
import numpy as np
import scipy.linalg

def closest_unitary(A):
    """Calculate the unitary matrix U that is closest, with respect to the
    operator-norm distance, to the general matrix A.
    Return U as a numpy array.
    """
    V, _, Wh = scipy.linalg.svd(A)
    return V @ Wh

taken from here. Also in this case the update is of order 1e-10.
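For instance, checking it on the U from above (a quick snippet):
W = closest_unitary(U)
print(np.max(np.abs(W @ W.conj().T - np.eye(4))))  # machine precision
print(np.max(np.abs(W - U)))                       # size of the update, ~1e-10 here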
I was trying to capture the problematic unitary (or not-so-unitary) produced by our algorithm with higher precision (the 9-significant-figure precision earlier was just an artifact of printing), but the failure is non-deterministic and happens quite rarely.
This is another unitary (printed with higher precision) for which I get the error:
U = np.array([[ 6.6280134117527545e-01+0.0000000000000000e+00j,
7.9070744081063765e-02+5.9295024503109660e-02j,
7.4224409707043104e-01+0.0000000000000000e+00j,
2.2472032857784679e-17+2.6966439429341613e-16j],
[-2.0415298103430044e-02+0.0000000000000000e+00j,
6.7998040221721590e-01-5.5649238375771592e-01j,
-9.7515627248098199e-03+1.1360376162381329e-01j,
4.6316863817927462e-01+0.0000000000000000e+00j],
[-7.4851689613752492e-01+0.0000000000000000e+00j,
5.1380709434665502e-02+6.7793907449903754e-02j,
6.5751341404899266e-01-3.1174303580772732e-03j,
-1.2363407817019017e-02-5.5929071146258019e-05j],
[ 2.3224179210950784e-04+0.0000000000000000e+00j,
-2.8790451981232074e-01+3.5784045774914769e-01j,
1.8763850458787680e-03-6.1120077603064406e-02j,
8.6765682538002109e-01-1.8025978197509515e-01j]])
For this matrix, np.max(np.abs(U.dot(U.conj().T) - np.eye(4))) = 5.383681304527641e-13. Again, using the closest_unitary defined above, the error does not happen.
The closest_unitary function you've supplied will always succeed, because it creates a two-qubit matrix where U @ U.conj().T is the identity up to a maximum elementwise difference of about numerical tolerance, regardless of how close the input matrix is. If we're only validating that our input looks approximately unitary, that isn't too much of a problem, but it would still be good to avoid an extra singular-value decomposition if at all possible.
I'm also not Schur (🙄) about whether just renormalising D in our approach would really solve the problem - it will ensure that the output matrix is unitary, but it still won't reliably pass the current np.isclose check. That check is there because the random algorithm sometimes fails by a long way, and we need some way to tell when we're in those cases, so we can repeat. In that sense, we could just raise the tolerance, or I could put in the randomisation-free routine - that's a little slower (if I remember correctly), but if the input is a valid complex-symmetric matrix (or close to one), it won't ever fail to diagonalise it into the correct form, so the isclose test becomes unnecessary.
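For reference, here is a minimal sketch of one deterministic way to do that diagonalisation (not necessarily what the routine mentioned above does): for a unitary complex-symmetric M2, the real and imaginary parts are commuting real-symmetric matrices, so a single real orthogonal basis diagonalises both.
import numpy as np

def simultaneous_diagonalization(R, S, tol=1e-10):
    # R and S are commuting real-symmetric matrices (here, M2.real and M2.imag).
    # Diagonalise R first, then diagonalise S restricted to each (near-)degenerate
    # eigenspace of R; the returned P is real orthogonal and diagonalises both,
    # so no randomised retry loop is needed.
    vals, P = np.linalg.eigh(R)
    start = 0
    for end in range(1, len(vals) + 1):
        if end == len(vals) or vals[end] - vals[start] > tol:
            block = P[:, start:end]
            _, Q = np.linalg.eigh(block.T @ S @ block)
            P[:, start:end] = block @ Q
            start = end
    return P

# P = simultaneous_diagonalization(M2.real, M2.imag); P.T @ M2 @ P is then diagonal.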
Regardless of everything else, I think a closest_unitary() would be a useful addition to quantum_info. For this bug report, I guess it's an empirical question whether the best (in terms of speed and/or precision) approach is to start by applying closest_unitary() followed by the existing randomised diagonalisation, or instead to renormalise D after a randomisation-free approach on the as-supplied matrix (thereby skipping the isclose() test). Is there a branch with your routine somewhere for checking?
Yeah, agreed that a closest_unitary function is useful. Is operator-norm distance the only meaningful definition of distance here, or are there others we should consider adding as well? I don't have another in mind; just throwing it out there.
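For concreteness, the operator-norm distance in question is the largest singular value of the difference, which numpy computes directly (a one-liner, with A any general matrix):
import numpy as np
# ord=2 on a 2-D array gives the spectral norm, i.e. the largest singular value
distance = np.linalg.norm(A - closest_unitary(A), 2)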
Because I am a fool, I had apparently deleted my old branch, but happily I was able to retrieve enough of the blobs from the depths of .git to reconstruct what I had on top of #6896. I've put it up as a branch (here's the comparison: https://github.com/Qiskit/qiskit-terra/compare/main...jakelishman:deterministic-weyl-decomposition).
Information
What is the current behavior?
When transpiling a certain two-qubit unitary U, for which is_unitary_matrix(U) returns True, the following error is given:
qiskit.exceptions.QiskitError: 'TwoQubitWeylDecomposition: failed to diagonalize M2. Please report this at https://github.com/Qiskit/qiskit-terra/issues/4159. Input: [[(0.4721638292945223+0.3042909989657806j), (-0.33229398169630664+0.27211948842897743j), (0.1011095965682541+0.4174268268048286j), (-0.06944655896172822+0.5574126561120751j)], [(-0.23379427491581675-0.36028952700716876j), (-0.542341468980306+0.14627858616257305j), (-0.10336810914783047+0.5521292672524615j), (0.22327151945422896-0.3669034407783363j)], [(0.1596861384428014+0.6044514527183461j), (0.330358794375403-0.001358482454473485j), (-0.1420309332827734+0.2982716756356982j), (0.47541894369461296-0.4060025239829998j)], [(-0.31512319119507964+0.09917738082237387j), (-0.03170892280287844-0.6243843093152571j), (0.5763757620358801+0.2421821678856728j), (0.22601598119007618+0.24094720322606808j)]]'
Steps to reproduce the problem
What is the expected behavior?
Either an error is thrown about U not being unitary when qc.unitary is called, or the decomposition should not error.
Suggested solutions
The tolerances in TwoQubitWeylDecomposition should be adjusted to match the default tolerances in is_unitary_matrix, or vice versa. Another possibility is (provided that U is close enough to unitary) to find the closest unitary to U, so that the decomposition goes through.
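To illustrate the tolerance mismatch concretely (a sketch; the atol keyword and the looseness of the default tolerance are assumptions that should be checked against the Terra source):
import numpy as np
from qiskit.quantum_info.operators.predicates import is_unitary_matrix

print(is_unitary_matrix(U))                         # True at the default (loose) tolerance
print(np.max(np.abs(U @ U.conj().T - np.eye(4))))   # the deviation the decomposition actually sees
print(is_unitary_matrix(U, atol=1e-13))             # likely False for the second U above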