tensorflow / quantum

Hybrid Quantum-Classical Machine Learning in TensorFlow
https://www.tensorflow.org/quantum

Optimising functions with OpenFermion #338

Open zohimchandani opened 4 years ago

zohimchandani commented 4 years ago

I am working on some quantum generative adversarial network code and have a discriminator that is a function of some parameters multiplied by expectation value measurements. For example (see the code below), the variable parameters that I would like to optimise given a loss function are the disc_weights, which are multiplied by OpenFermion operators that measure the X and Y expectation values on qubits 0 and 1 respectively.

psi = (disc_weights[0] * QubitOperator('X0') + disc_weights[1] * QubitOperator('Y0') )

When I use a Keras optimiser, it recognises disc_weights as a tf.Variable but clearly has trouble with the QubitOperator terms. How do I exclude these from the optimisation routine?

MichaelBroughton commented 4 years ago

Thanks for raising this issue. Would you mind providing a code snippet that shows what error or problem is happening?

zohimchandani commented 4 years ago

I have some qGAN code where the discriminator computes a set of Pauli expectation values from which a loss function is created.

The discriminator looks something like this:

def discriminator(disc_weights): 

    # Weighted sum of Pauli terms; the last two terms use the identity operator.
    psi = (disc_weights[0] * QubitOperator('X0') + 
           disc_weights[1] * QubitOperator('Y0') +
           disc_weights[2] * QubitOperator('Z0') +
           disc_weights[3] * QubitOperator('X1') +
           disc_weights[4] * QubitOperator('Y1') +
           disc_weights[5] * QubitOperator('Z1') + 
           disc_weights[6] * QubitOperator('')   + 
           disc_weights[7] * QubitOperator('')   ) 

    # backend and real_circ are defined elsewhere in my code.
    psi_exp = backend.get_operator_expectation_value(real_circ, QubitPauliOperator.from_OpenFermion(psi))

    return psi_exp

I then define the loss function for the discriminator, which is:

def disc_loss(disc_weights): 
    psi_exp = discriminator(disc_weights)
    loss = psi_exp
    return loss

and initialise the optimiser: opt = tf.keras.optimizers.SGD(0.4)

I also convert the parameters I want to tune, disc_weights, into a tf.Variable: disc_weights = tf.Variable(disc_weights)

I now want to tune the disc_weights parameters to minimise the loss:

    cost = lambda: disc_loss(disc_weights)

    for step in range(50): 
        opt.minimize(cost, disc_weights)

When I run this last section of code, I encounter a large error message which ends with: ValueError: Attempt to convert a value (1.0 [X0]) with an unsupported type (<class 'openfermion.ops._qubit_operator.QubitOperator'>) to a Tensor.

I am guessing the optimiser is having difficulty handling the QubitOperator terms, since they are not tf.Variables.
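
For reference, the same error appears if a QubitOperator is handed directly to TensorFlow, which I assume is what happens somewhere inside the loss once disc_weights is a tf.Variable. A minimal sketch (not taken from my actual code):

    import tensorflow as tf
    from openfermion import QubitOperator

    # QubitOperator is a plain Python object, so TF cannot turn it into a Tensor:
    tf.convert_to_tensor(QubitOperator('X0'))
    # Raises the same ValueError as above: "Attempt to convert a value (1.0 [X0])
    # with an unsupported type (<class '...QubitOperator'>) to a Tensor."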

Please let me know if this information is sufficient to trace the problem; otherwise, I can share more of my code with you.

MichaelBroughton commented 4 years ago

Ok, I think this is enough information to get a handle on what is going wrong.

When you write an expression like this:

def discriminator(disc_weights): 

    psi = (disc_weights[0] * QubitOperator('X0') + 
           disc_weights[1] * QubitOperator('Y0') +
           disc_weights[2] * QubitOperator('Z0') +
           disc_weights[3] * QubitOperator('X1') +
           disc_weights[4] * QubitOperator('Y1') +
           disc_weights[5] * QubitOperator('Z1') + 
           disc_weights[6] * QubitOperator('')   + 
           disc_weights[7] * QubitOperator('')   ) 

    psi_exp = backend.get_operator_expectation_value(real_circ, QubitPauliOperator.from_OpenFermion(psi))

    return psi_exp

psi_exp will not be differentiable and neither will your discriminator function. If you want differentiability through this function, you will need to change the code to use the appropriate TFQ primitives.

Looking at your code, it appears as though you want the disc_weights parameters to control the coefficient beside each QubitOperator in your overall sum of QubitOperators. If this is the case, you might want to consider something a little closer to this:

def discriminator(disc_weights):

    # Evaluate each Pauli term separately with a differentiable TFQ layer.
    individual_terms = tfq.layers.Expectation()(
        some_upstream_circuit_tensor,
        operators=[cirq.X(q[0]), cirq.Y(q[0]), cirq.Z(q[0]), cirq.X(q[1]), ...])

    # individual_terms will now contain a tensor of shape [1, <num ops>].
    # Assuming disc_weights has shape [<num ops>], contract over the operator
    # axis to weight each expectation value and sum them into a scalar.
    output = tf.einsum('j,ij->', disc_weights, individual_terms)

    # output is now a rank-0 scalar where each term in operators has been
    # weighted by disc_weights using TF ops, so gradients flow back to
    # disc_weights.
    return output

Cirq and OpenFermion on their own do not support TF integration, so when you want to make use of TF optimizers and TF variables, you must make sure that you use TFQ features that are differentiable. I haven't looked too closely, but there is also a PR open here: https://github.com/tensorflow/quantum/pull/339 where someone has also implemented some kind of quantum GAN, which might be helpful.
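
To make this concrete, here is a minimal self-contained sketch of the pattern above. The two-qubit layout, the stand-in real_circuit, and the restriction to six Pauli terms (dropping the identity terms for brevity) are placeholder assumptions rather than a drop-in replacement for your code:

    import cirq
    import tensorflow as tf
    import tensorflow_quantum as tfq

    q = cirq.GridQubit.rect(1, 2)

    # Placeholder for whatever state-preparation circuit the discriminator measures.
    real_circuit = cirq.Circuit([cirq.H(q[0]), cirq.CNOT(q[0], q[1])])
    circuit_tensor = tfq.convert_to_tensor([real_circuit])

    # Pauli terms whose expectation values get weighted by disc_weights.
    ops = [cirq.X(q[0]), cirq.Y(q[0]), cirq.Z(q[0]),
           cirq.X(q[1]), cirq.Y(q[1]), cirq.Z(q[1])]

    disc_weights = tf.Variable(tf.random.normal([len(ops)]))
    expectation = tfq.layers.Expectation()

    def disc_loss():
        # Shape [1, len(ops)]: one row per circuit, one column per operator.
        individual_terms = expectation(circuit_tensor, operators=ops)
        # Weight each expectation value by disc_weights and reduce to a scalar.
        return tf.einsum('j,ij->', disc_weights, individual_terms)

    opt = tf.keras.optimizers.SGD(0.4)
    for step in range(50):
        opt.minimize(disc_loss, var_list=[disc_weights])

Because the weighting happens in tf.einsum on the layer's output, the gradient with respect to disc_weights flows through ordinary TF ops, which is exactly what the Keras optimizer needs.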

lockwo commented 2 years ago

Any updates on this issue?