XanaduAI / strawberryfields

Strawberry Fields is a full-stack Python library for designing, simulating, and optimizing continuous variable (CV) quantum optical circuits.
https://strawberryfields.ai
Apache License 2.0

Issue with implementing the state preparation code with LossChannel #608

Closed spykspeigel closed 2 years ago

spykspeigel commented 3 years ago

Adding a LossChannel to each qumode leads to a mixed state, and the continuous-variable quantum neural network example no longer works as expected. Could you suggest how to deal with adding the LossChannel? I wish to replicate Fig. 6 of arxiv.org/1806.06871, which plots the mean squared error versus the loss coefficient.

trbromley commented 3 years ago

Hey @spykspeigel! Could you please provide a minimal code example and the error you obtain?

In case this helps, here is a snippet of working code that includes the LossChannel:

import strawberryfields as sf
from strawberryfields import ops

prog = sf.Program(3)

with prog.context as q:
    ops.Sgate(0.54) | q[0]
    ops.Sgate(0.54) | q[1]
    ops.Sgate(0.54) | q[2]
    ops.BSgate(0.43, 0.1) | (q[0], q[2])
    ops.BSgate(0.43, 0.1) | (q[1], q[2])
    ops.LossChannel(0.5) | q[0]
    ops.LossChannel(0.6) | q[1]

eng = sf.Engine("fock", backend_options={"cutoff_dim": 10})
result = eng.run(prog)
state = result.state

rho = state.dm()  # density matrix of the (mixed) output state
print(rho)

(adapted from the code in the docs)

Also note that arxiv.org/1806.06871 has a repo with supporting code: https://github.com/XanaduAI/quantum-neural-networks. However, the code in that repo uses an older version of Strawberry Fields.

spykspeigel commented 3 years ago

Thanks, @trbromley, for your response. The repo you shared doesn't implement a neural network with a LossChannel. I suppose its implementation is easy, but I am getting errors when adding a LossChannel to each qumode at the end of each layer of the network.

Each layer has the following structure (a runnable sketch with placeholder values is given right after this outline):

    interferometer(int1, q)

    for i in range(N):
        ops.Sgate( ) | q[i]
        ops.LossChannel(0.3) | q[i]
    interferometer( , q)

    for i in range(N):
        ops.Dgate( , ) | q[i]
        ops.Kgate( ) | q[i]
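
For reference, here is a minimal, self-contained sketch of one such layer. All parameter values are placeholders, and a single BSgate stands in for the interferometer() helper from the quantum-neural-networks repo, which isn't shown here:

import strawberryfields as sf
from strawberryfields import ops

N = 2
prog = sf.Program(N)

with prog.context as q:
    ops.BSgate(0.4, 0.1) | (q[0], q[1])  # stand-in for interferometer(int1, q)
    for i in range(N):
        ops.Sgate(0.1) | q[i]
        ops.LossChannel(0.3) | q[i]      # per-mode loss, as in the layer above
    ops.BSgate(0.4, 0.1) | (q[0], q[1])  # stand-in for the second interferometer
    for i in range(N):
        ops.Dgate(0.1, 0.0) | q[i]
        ops.Kgate(0.05) | q[i]

eng = sf.Engine("fock", backend_options={"cutoff_dim": 6})
state = eng.run(prog).state

rho = state.dm()
print(rho.shape)  # (6, 6, 6, 6): two Fock indices per mode in the density matrix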

The following line produces an error: difference = tf.reduce_sum(safe_abs(ket - target_state))

ValueError: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.

How would you suggest adding the LossChannel to each layer in this case?

trbromley commented 3 years ago

Hey @spykspeigel! Thanks for sharing these details. It looks like you have added the LossChannel correctly, but the postprocessing in the difference = tf.reduce_sum(safe_abs(ket - target_state)) line needs to be updated. Applying LossChannel causes the state to become mixed, and hence state.ket() returns None. One option is to access the density matrix through state.dm(). However, if you check out the function fitting example, you can see that we do not need the state itself but only the mean value of x. This is done via the line mean_x, svd_x = state.quad_expectation(0) (see around line 169).
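
To illustrate, here is a minimal sketch (Fock backend rather than the original TF training code, with placeholder values) of what is available once loss makes the state mixed:

import strawberryfields as sf
from strawberryfields import ops

prog = sf.Program(1)

with prog.context as q:
    ops.Sgate(0.5) | q[0]
    ops.LossChannel(0.3) | q[0]  # loss makes the state mixed

eng = sf.Engine("fock", backend_options={"cutoff_dim": 10})
state = eng.run(prog).state

print(state.ket())                         # None: no pure-state ket exists
rho = state.dm()                           # the density matrix is still available
mean_x, var_x = state.quad_expectation(0)  # mean and variance of the x quadrature
print(mean_x)

With the TF backend, the same quantities can be used to build the cost instead of the ket.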

sduquemesa commented 2 years ago

Hi @spykspeigel! I hope your conversation with @trbromley has helped you solve your issues 🤓 As this conversation has not been active for a while, I am closing this issue.