Closed: jfold closed this issue 4 years ago.
Thanks for the issue. It looks like the problem is that when you differentiate this `get_loss` function, TensorFlow attempts to differentiate through a full state-vector computation. That is not currently supported in TFQ; you need to use the `expectation` or `sampled_expectation` ops if you want gradients to flow backwards through those ops. Using these has the perk of being something you can do on a real device too, whereas computing and differentiating state-vector entries becomes extremely costly past a few qubits.
That being said, if switching to `expectation` or `sampled_expectation` doesn't make sense for whatever reason, you can use `@tf.custom_gradient` to define your own gradient for a particular op or function. You could define a custom gradient for `get_loss` or for the function returned by `tfq.get_state_op()`, and that should also solve your problem.
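For the second route, here is a minimal sketch of the `@tf.custom_gradient` pattern. The forward pass below is a plain stand-in, not the actual `get_loss` from the issue; in practice the forward function would run the state-vector simulation and `grad` would return a hand-derived gradient (e.g. from the parameter-shift rule):

```python
import tensorflow as tf

@tf.custom_gradient
def get_loss(thetas):
    # Stand-in forward pass; in the real case this would be the
    # simulation that has no registered gradient.
    loss = tf.reduce_sum(tf.square(thetas))

    def grad(upstream):
        # Hand-written backward pass. Here: d(sum θ²)/dθ = 2θ,
        # scaled by the incoming upstream gradient.
        return upstream * 2.0 * thetas

    return loss, grad

thetas = tf.Variable([0.5, -1.0])
with tf.GradientTape() as tape:
    loss = get_loss(thetas)
gradients = tape.gradient(loss, thetas)
print(gradients.numpy())  # [ 1. -2.]
```

Because the gradient is supplied explicitly, TensorFlow never tries to look up a registered gradient for the inner ops, which is exactly what triggers the `LookupError` otherwise.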
Thanks for this super quick answer, Michael! I will probably need to use `@tf.custom_gradient`.
Hi! I am trying to calculate and optimize a loss with TFQ, which corresponds to the negative log likelihood over some datapoints. The idea (https://www.nature.com/articles/s41534-019-0157-8.pdf) is to have a parameterized circuit producing a state, which via the Born rule gives a probability distribution that should approximate some data distribution.
I use `!pip install tensorflow==2.1.0` and `!pip install -U tensorflow-quantum` in Colab. Here is the code that calculates the loss and the gradients, respectively, and I get `LookupError: gradient registry has no entry for: TfqSimulateState` when the line `gradients = g.gradient(outputs, self.thetas_tf)` is executed. I am not quite sure why. Can it be solved?
I apologize if something is unclear or I did not include enough code. Thanks in advance!