PennyLaneAI / pennylane

PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
https://pennylane.ai
Apache License 2.0

Provide quantum-aware optimizers in PyTorch and TensorFlow interfaces #846

Open trbromley opened 4 years ago

trbromley commented 4 years ago

PennyLane currently supports quantum-aware optimizers, including QNGOptimizer and RotosolveOptimizer. However, these optimizers are restricted to the NumPy interface and are not available when using the PyTorch or TensorFlow interfaces.

This issue requests that quantum-aware optimizers be made available in the PyTorch and TensorFlow interfaces. For example, currently a user could do the following in the NumPy interface:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def f(weights):
    qml.templates.StronglyEntanglingLayers(weights, wires=range(2))
    return qml.expval(qml.PauliZ(0))

opt = qml.RotosolveOptimizer()
weights = qml.init.strong_ent_layers_uniform(4, 2)

weights = opt.step(f, weights)

It would be nice to do the following in the TF interface (not currently available):

import pennylane as qml
import tensorflow as tf
import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, interface="tf")
def f(weights):
    qml.templates.StronglyEntanglingLayers(weights, wires=range(2))
    return qml.expval(qml.PauliZ(0))

opt = qml.RotosolveOptimizerTF()  # suggested optimizer
weights = qml.init.strong_ent_layers_uniform(4, 2)
weights = tf.Variable(weights)

opt.minimize(lambda: f(weights), [weights])

This may involve creating a new RotosolveOptimizerTF class with an API similar to the tf.keras.optimizers.Optimizer base class.
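
As a purely illustrative starting point, a gradient-free Rotosolve optimizer with a Keras-like minimize(loss_fn, var_list) entry point might be sketched as follows (RotosolveOptimizerTF is the hypothetical class name suggested above; the single-parameter update is the standard closed-form Rotosolve rule):

import numpy as np
import tensorflow as tf

class RotosolveOptimizerTF:
    """Sketch of a gradient-free Rotosolve optimizer with a Keras-like
    minimize(loss_fn, var_list) entry point (hypothetical API)."""

    def minimize(self, loss_fn, var_list):
        # Sweep over every scalar parameter and apply the closed-form
        # Rotosolve update; no gradients are computed at any point.
        for var in var_list:
            for d in range(int(tf.size(var))):
                self._update_single_parameter(loss_fn, var, d)

    def _update_single_parameter(self, loss_fn, var, d):
        original = var.numpy().flatten()

        def cost_at(value):
            # Evaluate the loss with the d-th parameter set to `value`
            shifted = original.copy()
            shifted[d] = value
            var.assign(shifted.reshape(var.shape.as_list()))
            return float(loss_fn())

        theta = original[d]
        m_0 = cost_at(theta)
        m_plus = cost_at(theta + np.pi / 2)
        m_minus = cost_at(theta - np.pi / 2)

        # Closed-form minimizer of the sinusoidal cost in this parameter
        theta_star = theta - np.pi / 2 - np.arctan2(
            2 * m_0 - m_plus - m_minus, m_plus - m_minus
        )

        updated = original.copy()
        updated[d] = theta_star
        var.assign(updated.reshape(var.shape.as_list()))

With the TF QNode above, this would be called exactly as in the example: opt = RotosolveOptimizerTF(); opt.minimize(lambda: f(weights), [weights]).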

lumander commented 4 years ago

Can I try this one?

trbromley commented 4 years ago

Hi @lumander, absolutely - we welcome contributions from the community!

We have a PennyLane contribution guide here, but also feel free to reach out to us on this issue thread if you have any questions or need more details.

Is there any more information we can provide now to help you get started?

lumander commented 4 years ago

Hi @trbromley. I am trying to integrate the Rotosolve optimizer with the TensorFlow interface. I think Rotosolve is not a good fit for standard TensorFlow, since TensorFlow's internal optimizer routines are built around gradients, while the Rotosolve method is gradient-free. What about integrating with TensorFlow Quantum? It seems that a Rotosolve optimizer is already there: https://github.com/tensorflow/quantum/blob/master/tensorflow_quantum/python/optimizers/rotosolve_minimizer.py

josh146 commented 4 years ago

Hi @lumander - I tend to agree. The rotoselect/rotosolve algorithms, being gradient free, are likely not as well suited to a TF implementation as the QNGOptimizer. Perhaps that could be a better starting point?
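
For illustration, a bare-bones QNG step in the TF interface could look roughly like this (a sketch only: it assumes a metric-tensor helper such as qml.metric_tensor can be applied to a TF-interface QNode, and it hard-codes a small example circuit):

import pennylane as qml
import tensorflow as tf

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, interface="tf")
def cost(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

params = tf.Variable([0.1, 0.2], dtype=tf.float64)
lr = 0.1

for _ in range(50):
    # Ordinary gradient of the cost via the TF interface
    with tf.GradientTape() as tape:
        loss = cost(params)
    grad = tape.gradient(loss, params)

    # Fubini-Study metric tensor at the current parameters
    # (assumes a metric-tensor transform is exposed for TF QNodes)
    metric = tf.convert_to_tensor(qml.metric_tensor(cost)(params))

    # QNG update: params <- params - lr * g^+ grad
    nat_grad = tf.linalg.pinv(metric) @ tf.reshape(grad, (-1, 1))
    params.assign_sub(lr * tf.reshape(nat_grad, params.shape))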

Out of curiosity, does the TensorFlow optimizer class allow for gradient-free optimization? Or is it hardcoded to always compute the gradient, even if it is not required?

What about integrating with tensorflow quantum? It seems that the rotosolve optimizer is already there

I just had a look - it appears that the TFQ Rotosolve optimizer is very similar to our existing optimizer: it is a standalone function that runs its own for loop, separate from the built-in Keras style of optimization (in our case, the for loop is external and written by the user).

Perhaps, with the advent of #886, we will have a framework for modifying the existing Rotosolve optimizer to be framework independent.

lumander commented 4 years ago

I tend to agree. The rotoselect/rotosolve algorithms, being gradient free, are likely not as well suited to a TF implementation as the QNGOptimizer. Perhaps that could be a better starting point?

Yes, I'll think about it a little more and then move on to the next one ;)

Out of curiosity, does the TensorFlow optimizer class allow for gradient-free optimization? Or is it hardcoded to always compute the gradient, even if it is not required?

If you want to create a custom optimizer, you need to follow the steps here. Your custom methods are called only after the gradients have already been evaluated; in fact, the very first step of the minimize() method is to call compute_gradients(). So TF always computes the gradient in some form.
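
To make that concrete, the extension points of a custom optimizer look roughly like the sketch below (a minimal SGD example against the TF 2.x tf.keras.optimizers.Optimizer base class, as it stood at the time of this discussion): the apply methods only ever receive gradients that minimize() has already computed, so there is no natural hook for a gradient-free update.

import tensorflow as tf

class MySGD(tf.keras.optimizers.Optimizer):
    """Minimal custom Keras optimizer (plain SGD), to show the extension points."""

    def __init__(self, learning_rate=0.01, name="MySGD", **kwargs):
        super().__init__(name, **kwargs)
        self._set_hyper("learning_rate", learning_rate)

    def _resource_apply_dense(self, grad, var, apply_state=None):
        # Called by minimize()/apply_gradients() with the gradient already
        # computed -- a subclass can only decide how to *apply* it.
        lr = self._get_hyper("learning_rate", var.dtype.base_dtype)
        return var.assign_sub(lr * grad)

    def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
        raise NotImplementedError

    def get_config(self):
        config = super().get_config()
        config.update(
            {"learning_rate": self._serialize_hyperparameter("learning_rate")}
        )
        return config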