Closed: eliottrosenberg closed this issue 1 year ago
The behavior is expected: by default, simulators use the low-precision np.complex64 dtype, preferring speed over accuracy. This behavior is configurable, and if you set a higher precision for the simulator, the results you get are much more accurate. For example, in your code snippet, simply replacing
sim = cirq.Simulator()
with
sim = cirq.Simulator(dtype=np.complex128)
results in the following output:
>>> test_conservation(np.pi*0.4, np.pi*0.8, 8, np.arange(4), 4)
array([-5.55111512e-17+0.j, 1.19348975e-15+0.j, 5.57331958e-14+0.j,
-2.61934474e-10+0.j])
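For reference, the smallest error in each output sits at the level of the corresponding machine epsilon (the higher powers of $\hat O$ amplify it), and the two dtypes differ there by about nine orders of magnitude:

import numpy as np
print(np.finfo(np.complex64).eps)   # ~1.19e-07 (single precision)
print(np.finfo(np.complex128).eps)  # ~2.22e-16 (double precision)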
I'll mark this as intended behavior and close the bug. Please let me know if you have any further questions.
Description of the issue
simulate_expectation_values()
gives very wrong answers for large PauliSums. It seems to be a numerical precision issue.
How to reproduce the issue
Consider a circuit consisting of alternating layers of fSim gates. Because fSim gates are number-conserving, this circuit conserves $\hat O = \sum_i Z_i$. In particular, if we have $N$ qubits and initialize them to a bitstring with $N/2$ 1s, we are in an eigenstate of $\hat O$ with eigenvalue 0, so, in the final state, the expectation value of all positive powers of $\hat O$ should be 0.
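As a quick sanity check, one can confirm numerically that the fSim unitary commutes with the two-qubit number operator $Z_0 + Z_1$ (a minimal NumPy sketch):

import numpy as np
import cirq

# fSim is block diagonal in the particle-number sectors, so it commutes
# with Z0 + Z1; this is what makes the circuit conserve O = sum_i Z_i.
u = cirq.unitary(cirq.FSimGate(np.pi * 0.4, np.pi * 0.8))
z = np.diag([1.0, -1.0])
n_op = np.kron(z, np.eye(2)) + np.kron(np.eye(2), z)  # Z0 + Z1
print(np.allclose(u @ n_op, n_op @ u))  # prints True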
Consider the following code:
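(The original snippet did not survive here; the following is a minimal sketch consistent with the call signature used below. The argument meanings, the gate layout, and the circuit depth n_layers are assumptions.)

import numpy as np
import cirq

def test_conservation(theta, phi, n_qubits, ones, max_power, n_layers=10):
    """Sketch: return <O>, <O^2>, ..., <O^max_power> for O = sum_i Z_i."""
    qubits = cirq.LineQubit.range(n_qubits)
    circuit = cirq.Circuit()
    # Prepare a bitstring with 1s on the qubits listed in `ones`.
    circuit.append(cirq.X(qubits[i]) for i in ones)
    # Alternating even/odd layers of (number-conserving) fSim gates;
    # the depth n_layers is a guess, the original may differ.
    for layer in range(n_layers):
        circuit.append(
            cirq.FSimGate(theta, phi).on(qubits[i], qubits[i + 1])
            for i in range(layer % 2, n_qubits - 1, 2)
        )
    # Build O, O^2, ..., O^max_power by repeated PauliSum multiplication.
    obs = cirq.PauliSum.from_pauli_strings([cirq.Z(q) for q in qubits])
    observables, op = [], obs
    for _ in range(max_power):
        observables.append(op)
        op = op * obs
    sim = cirq.Simulator()
    return np.array(sim.simulate_expectation_values(circuit, observables))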
If you now run
test_conservation(np.pi*0.4, np.pi*0.8, 8, np.arange(4), 4)
I find that it outputs
array([ 5.96046448e-08, -7.45058060e-08, -6.55651093e-06, -2.91442871e-02])
even though these should all be 0. Clearly $10^{-2}$ is a significant error!
Cirq version
1.2.0.dev20221220232900