tensorflow / quantum

Hybrid Quantum-Classical Machine Learning in TensorFlow
https://www.tensorflow.org/quantum
Apache License 2.0

gradient registry has no entry for: TfqSimulateState #298

Closed jfold closed 4 years ago

jfold commented 4 years ago

Hi! I am trying to calculate and optimize a loss with TFQ, which corresponds to the negative log likelihood over some datapoints. The idea (https://www.nature.com/articles/s41534-019-0157-8.pdf) is to have a parameterized circuit producing a state, which via the Born rule gives a probability distribution that should approximate some data distribution.

I use !pip install tensorflow==2.1.0 and !pip install -U tensorflow-quantum in Colab. Here is the code that calculates the loss and gradients, respectively:

def get_loss(self, X):
    output = tfq.get_state_op()(self.circuit_tensor, self.symbols, self.thetas_tf)[0]  # get wavefunction
    output = tf.expand_dims(tf.multiply(tf.math.conj(output), output), axis=1)  # amplitudes -> probabilities
    output = tf.linalg.matmul(X, output)  # inner product between data and probabilities
    output = -tf.reduce_mean(tf.math.log(output))  # negative log-likelihood across samples
    output = tf.math.real([[output]])
    return output

def get_gradients(self):
    with tf.GradientTape() as g:
        g.watch(self.thetas_tf)
        outputs = self.get_loss(X_)
    gradients = g.gradient(outputs, self.thetas_tf)
    return gradients

and I get "LookupError: gradient registry has no entry for: TfqSimulateState" when the line "gradients = g.gradient(outputs, self.thetas_tf)" executes. I am not sure why this happens. Can it be solved?

I apologize if something is unclear or I did not include enough code. Thanks in advance!

MichaelBroughton commented 4 years ago

Thanks for the issue. It looks like the problem is that when you differentiate this get_loss function, TensorFlow attempts to differentiate through a full state vector computation. This is not currently supported in TFQ; you need to use expectation or sampled_expectation ops if you want gradients to flow backwards. Using these has the perk of also being something you can do on a real device, whereas computing and differentiating state vector entries becomes extremely costly past a few qubits.

That being said, if switching to expectation or sampled_expectation doesn't make sense for whatever reason, you can use @tf.custom_gradient to define your own gradient for a particular op or function. You could define a custom gradient for get_loss or for the function returned by tfq.get_state_op(), and that should also solve your problem.
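To illustrate the @tf.custom_gradient pattern in isolation (plain TensorFlow, no TFQ; the function and its hand-written gradient are a toy example, not your loss):

```python
import tensorflow as tf

@tf.custom_gradient
def square_with_manual_grad(x):
    """Computes x**2, but supplies the backward pass by hand."""
    y = x * x

    def grad(dy):
        # Chain rule: upstream gradient dy times d(x**2)/dx = 2x.
        return dy * 2.0 * x

    return y, grad

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = square_with_manual_grad(x)
g = tape.gradient(y, x)  # 2 * 3.0 = 6.0
```

Wrapping get_loss (or the state op) the same way lets you return whatever gradient expression you derive for your negative log-likelihood, bypassing the missing TfqSimulateState gradient registration.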

jfold commented 4 years ago

Thanks for this super quick answer, Michael! I will probably need to use @tf.custom_gradient.