Closed T-DHQian closed 1 month ago
Maybe the error is here, at line 630. Checking whether it's true or not:
julia> using LinearAlgebra
julia> using Nemo
julia> using QuantumClifford
julia> using QuantumClifford: nqubits, stab_to_gf2, length
julia> using QuantumClifford.ECC: parity_checks
julia> s = QuantumClifford.stabilizerview(s_original);
julia> mat = matrix(GF(2), stab_to_gf2(parity_checks(s)));
julia> using LinearAlgebra
julia> computed_rank = LinearAlgebra.rank(mat)
103
julia> nqubits(s)
448
julia> d = QuantumClifford.destabilizerview(s_original);
julia> nqubits(d)
448
julia> mat = matrix(GF(2), stab_to_gf2(parity_checks(d)));
julia> computed_rank = LinearAlgebra.rank(mat)
103
julia> length(s)
103
julia> length(d)
103
The default setting is phases=true, and I purposely passed it explicitly in the code, but this still doesn't work.
s_original_traced = traceout!(copy(s_original), 1:51, phases=true) # 51 is chosen arbitrarily
s_appended_traced = traceout!(copy(s_appended), 1:51, phases=true)
Further, I found this issue can be easily reproduced with a random mixed stabilizer state with a sufficient number of qubits, such as:
s_original = MixedDestabilizer(random_stabilizer(400,700))
Hi, @T-DHQian, thank you for reporting this; it is much appreciated. I simplified your example a bit further -- it is definitely a bug in the library.
s = MixedDestabilizer(random_stabilizer(400,700))
s_appended = s ⊗ MixedDestabilizer(S"Z") # Add another qubit
s_traced = traceout!(copy(s), 1:51, phases=true) # 51 is chosen arbitrarily
s_appended_traced = traceout!(copy(s_appended), 1:51, phases=true)
s_traced_appended = s_traced ⊗ MixedDestabilizer(S"Z")
canonicalize!(s_traced_appended)
canonicalize!(s_appended_traced)
stabilizerview(s_traced_appended) == stabilizerview(s_appended_traced)
Here is a yet more simplified reproducer:
s = MixedDestabilizer(random_stabilizer(10,500))
s_rref,r1 = canonicalize_rref!(copy(s), 1:2)
@assert canonicalize!(copy(stabilizerview(s_rref))) == canonicalize!(copy(stabilizerview(s)))
Happens only for a pretty large number of qubits (>450)...
The bug seems to be in the SIMD code in mul_ordered! -- switching to a non-SIMD version of that code fixes this issue (and makes the code much slower).
The final reproducer:
n=8 # works fine for n<8
a = rand(UInt64, n)
b = rand(UInt64, n)
c1,c2 = QuantumClifford.mul_ordered!(copy(a),copy(b))
n1,n2 = QuantumClifford._mul_ordered_nonvec!(copy(a),copy(b))
@assert c1%4==n1%4
@assert c2%2==n2%2
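The cross-checking pattern in the reproducer above -- running a fast bitwise implementation and a slow per-qubit reference on the same random inputs and asserting they agree mod 4 -- can be illustrated outside of Julia. The sketch below is purely illustrative Python, not QuantumClifford's actual code: the function names and the packed-integer phase formula are my own, with the per-qubit lookup table serving as the trusted reference.

```python
import random

# Exponent of i in the product p*q of single-qubit Paulis, e.g. X*Y = i*Z -> 1.
PHASE = {("I","I"):0, ("I","X"):0, ("I","Y"):0, ("I","Z"):0,
         ("X","I"):0, ("X","X"):0, ("X","Y"):1, ("X","Z"):3,
         ("Y","I"):0, ("Y","X"):3, ("Y","Y"):0, ("Y","Z"):1,
         ("Z","I"):0, ("Z","X"):1, ("Z","Y"):3, ("Z","Z"):0}

def phase_reference(x1, z1, x2, z2, n):
    """Slow per-qubit reference: sum the single-qubit phase exponents mod 4."""
    letters = {(0,0): "I", (1,0): "X", (0,1): "Z", (1,1): "Y"}
    total = 0
    for i in range(n):
        p = letters[((x1 >> i) & 1, (z1 >> i) & 1)]
        q = letters[((x2 >> i) & 1, (z2 >> i) & 1)]
        total += PHASE[(p, q)]
    return total % 4

def phase_packed(x1, z1, x2, z2):
    """Bitwise version over packed integers: each anticommuting position
    contributes +1 or -1 (i.e. 3 mod 4) to the exponent of i."""
    anti = (x1 & z2) ^ (x2 & z1)  # positions where the two Paulis anticommute
    sign = ((x1 ^ x2) ^ (z1 ^ z2) ^ (x1 & z2)) & anti  # 1 where the contribution is -1
    return (bin(anti).count("1") + 2 * bin(sign).count("1")) % 4

# Consistency check on random Pauli strings, mirroring the reproducer above.
random.seed(0)
n = 512
for _ in range(100):
    x1, z1 = random.getrandbits(n), random.getrandbits(n)
    x2, z2 = random.getrandbits(n), random.getrandbits(n)
    assert phase_packed(x1, z1, x2, z2) == phase_reference(x1, z1, x2, z2, n)
```

This is exactly why keeping a deliberately naive reference implementation around pays off: the optimized version is checked against it on random inputs, so a dropped term in the bitwise formula shows up immediately.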
Thanks for your reply! Could you please explain in a bit more detail how to switch to a non-SIMD version? Should I just use _mul_ordered_nonvec! instead of mul_ordered! everywhere for now to get the correct answer? Meanwhile, does this issue have a prevalent impact on other operations, since it is a basic operation appearing in many places, such as canonicalize!?
Indeed, this issue seems to affect most phase calculations for expressions with roughly more than 500 qubits. It is rather disturbing that the test suite has not detected it (we have plenty of tests that run with large numbers of qubits). After it is fixed, I will investigate when it was introduced and why it was not detected.
There is no straightforward way to switch to _mul_ordered_nonvec! -- the only reason it exists is to be used in tests and consistency checks (to automatically detect issues like the one you are facing). I will try to find the root cause today and upload a fix, though.
I'm not sure if the problem is possibly due to SIMD altering the order of computation, which may have a detrimental impact here:
cnt2 ⊻= (newx1 ⊻ newz1 ⊻ x1z2) & anti_comm
cnt1 ⊻= anti_comm
If I replace it with something like:
if packs>0
lane = SIMD.VecRange{veclen}(0)
@inbounds for i in 1:veclen:(len-veclen+1)
x1::VT, x2::VT, z1::VT, z2::VT = l[i+lane], r[i+lane], l[i+len+lane], r[i+len+lane]
r[i+lane] = newx1 = x1 ⊻ x2
r[i+len+lane] = newz1 = z1 ⊻ z2
x1z2 = x1 & z2
anti_comm = (x2 & z1) ⊻ x1z2
# Process each element in the vector to maintain order sensitivity
for j in 1:veclen
element_anti_comm = anti_comm[j]
rcnt2 ⊻= (rcnt1 ⊻ newx1[j] ⊻ newz1[j] ⊻ x1z2[j]) & element_anti_comm
rcnt1 ⊻= element_anti_comm
end
end
end
the problem seems to disappear. However, the performance seems to be even worse than _mul_ordered_nonvec!
...
@T-DHQian, there is a really neat way to prove that the order does not matter without even touching the formulas. Imagine you have two large Pauli strings A and B and you calculate prodphase(A, B), where prodphase is the phase of the product. By the algebraic properties of the tensor product, the result should be the same if you calculate prodphase(A[first_half], B[first_half]) + prodphase(A[second_half], B[second_half]), and given that + is commutative, the formula we are using cannot depend on the order of the loop. Here first_half = 1:len(A)/2 and second_half is the rest.
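The splitting argument above can be checked concretely. A minimal illustrative sketch in Python (the prodphase here is my own toy implementation over letter strings, not QuantumClifford's): the phase exponent is a per-qubit sum mod 4, so any split point gives the same total.

```python
import random

# Exponent of i in the product p*q of single-qubit Paulis:
# X*Y = i*Z -> 1, Y*X = -i*Z -> 3, and cyclically; commuting pairs contribute 0.
PHASE = {("X","Y"): 1, ("Y","Z"): 1, ("Z","X"): 1,
         ("Y","X"): 3, ("Z","Y"): 3, ("X","Z"): 3}

def prodphase(a, b):
    """Phase exponent (power of i, mod 4) of the product of two Pauli strings.
    A plain sum of per-qubit contributions, hence order-independent."""
    return sum(PHASE.get((p, q), 0) for p, q in zip(a, b)) % 4

random.seed(0)
n = 1000
A = "".join(random.choice("IXYZ") for _ in range(n))
B = "".join(random.choice("IXYZ") for _ in range(n))

# Splitting into two pieces and adding the two phases gives the same answer
# for any split point -- so no loop order can change the result.
for k in (1, n // 2, n - 1):
    assert prodphase(A, B) == (prodphase(A[:k], B[:k]) + prodphase(A[k:], B[k:])) % 4
```

The same check, done with the library's own prodphase on random Pauli strings, is the kind of property test that would have caught this bug early.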
The bug was due to a missing term in the formula as you can see in the linked PR. This is a monumentally stupid mistake and I am very surprised it was not caught earlier. It is by far the worst bug we have ever had. Thank you for reporting it, it is greatly appreciated.
I think this is now fixed in #322.
@T-DHQian , do you mind checking whether it works as expected for you. Please also do not hesitate to submit pull requests with additional tests -- those are always appreciated.
A new version will be publicly released shortly.
Thank you so much! I believe the bug has now been fixed. I noticed the missing term and my previous testing might have been somewhat problematic, leading me to think that simply adding the missing term wasn't sufficient. This then made me consider whether the issue might be related to the order of computation. Your explanation is very clear and helpful. I will do more tests tomorrow, and I'll be sure to let you know if I encounter any other issues.
Describe the bug 🐞
The traceout! function in QuantumClifford is producing unexpected results when applied to certain mixed stabilizer states. Specifically, the function's behavior doesn't align with theoretical expectations when comparing operations that should commute.
Expected behavior
Given a mixed stabilizer state, tracing out some qubits at the front should produce a reduced state on the remaining qubits. Appending a single qubit stabilized by Z at the end should not affect the reduced state of the original qubits; it should only add a Z stabilizer on the new qubit.
In other words, these two operations should commute:
Operation A: Trace out front qubits, then append a |Z⟩ qubit
Operation B: Append a |Z⟩ qubit, then trace out front qubits
The resulting states from both operations should be identical.
Contrary to expectations, I've found that for some mixed stabilizer states, Operations A and B yield different results. This suggests that the traceout! function may not be correctly handling all cases.
Minimal Reproducible Example 👇
The output looks like:
Further examination of, for example, the second stabilizer:
One can see that the stabilizers differ in sign, which places them in two different orthogonal subspaces.
The state on which this issue occurs is quite tricky to obtain, and a certain one is in data.zip. It seems that this issue only occurs for larger system sizes.