quantumlib / Cirq

A Python framework for creating, editing, and invoking Noisy Intermediate Scale Quantum (NISQ) circuits.
Apache License 2.0

Numerical errors introduced by `simulate_expectation_values()` #6011

Closed · eliottrosenberg closed 1 year ago

eliottrosenberg commented 1 year ago

Description of the issue

`simulate_expectation_values()` gives very wrong answers for large `PauliSum`s. It seems to be a numerical precision issue.

How to reproduce the issue

Consider a circuit consisting of alternating layers of fSim gates. Because fSim gates are number-conserving, this circuit conserves $\hat O = \sum_i Z_i$. In particular, if we have $N$ qubits and initialize them to a bitstring with $N/2$ 1s, we are in an eigenstate of $\hat O$ with eigenvalue 0, so, in the final state, the expectation value of all positive powers of $\hat O$ should be 0.
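As a quick sanity check of this conservation argument, one can verify that the fSim unitary commutes with the two-qubit restriction of $\hat O$ (a small illustrative snippet, not part of the reproduction; the angles below are arbitrary):

```python
import numpy as np
import cirq

# Illustrative check: the fSim unitary is block-diagonal in the excitation-number
# sectors, so it commutes with Z⊗I + I⊗Z (the two-qubit piece of O = sum_i Z_i).
theta, phi = 0.3, 0.7  # arbitrary angles, used only for this check
U = cirq.unitary(cirq.FSimGate(theta, phi))
Z = cirq.unitary(cirq.Z)
N2 = np.kron(Z, np.eye(2)) + np.kron(np.eye(2), Z)
assert np.allclose(U @ N2, N2 @ U)  # [U, N2] = 0, so each fSim layer conserves O
```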

Consider the following code:

import cirq
import numpy as np

def fsim_cycle(theta, phi, n):
    # One cycle: fSim gates on even-aligned pairs, then on odd-aligned pairs.
    even = cirq.Circuit(
        cirq.FSimGate(theta, phi).on(cirq.LineQubit(q), cirq.LineQubit(q + 1))
        for q in range(0, n - 1, 2)
    )
    odd = cirq.Circuit(
        cirq.FSimGate(theta, phi).on(cirq.LineQubit(q), cirq.LineQubit(q + 1))
        for q in range(1, n - 1, 2)
    )
    return even + odd

def Nph_operator(n):
    # The conserved quantity O = sum_i Z_i.
    O = sum(cirq.Z(cirq.LineQubit(q)) for q in range(n))
    return O

def test_conservation(theta, phi, n, locs, ncycles, max_moment=4):
    # Prepare a bitstring with 1s at `locs`, apply `ncycles` fSim cycles,
    # then compute <O**k> for k = 1, ..., max_moment.
    qc = cirq.Circuit(cirq.X(cirq.LineQubit(q)) for q in locs) + ncycles * fsim_cycle(theta, phi, n)
    O = Nph_operator(n)
    ops = [O**k for k in range(1, max_moment + 1)]

    sim = cirq.Simulator()
    O_powers = np.array(sim.simulate_expectation_values(qc, ops)).real
    return O_powers

If you now run `test_conservation(np.pi*0.4, np.pi*0.8, 8, np.arange(4), 4)`, it outputs `array([5.96046448e-08, -7.45058060e-08, -6.55651093e-06, -2.91442871e-02])`, even though all of these should be 0. Clearly an error of order $10^{-2}$ is significant!
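For a rough sense of scale (an illustrative aside, not a full error analysis): $\hat O^4$ for $N = 8$ expands into a `PauliSum` with many terms and sizable coefficients, so per-term errors on the order of the single-precision machine epsilon can plausibly accumulate well above $10^{-7}$:

```python
import numpy as np
import cirq

# Illustrative: count the Pauli-string terms in O**4 for N = 8 and compare with
# the single-precision machine epsilon.
O = sum(cirq.Z(cirq.LineQubit(q)) for q in range(8))
print(len(list(O**4)))             # number of terms in the expanded PauliSum
print(np.finfo(np.complex64).eps)  # ~1.19e-07
```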

Cirq version 1.2.0.dev20221220232900

tanujkhattar commented 1 year ago

This behavior is expected: by default, the simulators use low-precision `np.complex64` arithmetic, preferring speed over accuracy. The precision is configurable, and with a higher-precision dtype the results are much more accurate. For example, in your code snippet, simply replacing `sim = cirq.Simulator()` with `sim = cirq.Simulator(dtype=np.complex128)` results in the following output:

>>> test_conservation(np.pi*0.4, np.pi*0.8, 8, np.arange(4), 4)
array([-5.55111512e-17+0.j,  1.19348975e-15+0.j,  5.57331958e-14+0.j,
       -2.61934474e-10+0.j])
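For completeness, here is a minimal self-contained sketch of the comparison, mirroring the reproduction above (the variable names are mine, not from the snippet):

```python
import numpy as np
import cirq

# Sketch: 8 qubits, 4 fSim cycles, first four qubits excited, run at both precisions.
theta, phi, n, ncycles = np.pi * 0.4, np.pi * 0.8, 8, 4
qubits = cirq.LineQubit.range(n)
cycle = cirq.Circuit(
    [cirq.FSimGate(theta, phi).on(qubits[q], qubits[q + 1]) for q in range(0, n - 1, 2)],
    [cirq.FSimGate(theta, phi).on(qubits[q], qubits[q + 1]) for q in range(1, n - 1, 2)],
)
circuit = cirq.Circuit(cirq.X(qubits[q]) for q in range(4)) + ncycles * cycle
O = sum(cirq.Z(qb) for qb in qubits)
ops = [O**k for k in range(1, 5)]

for dtype in (np.complex64, np.complex128):
    sim = cirq.Simulator(dtype=dtype)
    values = np.real(sim.simulate_expectation_values(circuit, ops))
    print(dtype.__name__, values)  # complex64 shows errors up to ~1e-2; complex128 stays near zero
```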

I'll mark this as intended behavior and close the bug. Please let me know if you have any further questions.