trbromley opened this issue 4 years ago
Can I try this one?
Hi @lumander, absolutely - we welcome contributions from the community!
We have a PennyLane contribution guide here, but also feel free to reach out to us on this issue thread if you have any questions or need more details.
Is there any more info we could help you with now to get started?
Hi @trbromley. I am trying to integrate the Rotosolve optimizer with the TensorFlow interface. I think Rotosolve is not a good fit for plain TensorFlow, since TF's internal optimization routines are built around gradients while the Rotosolve method is gradient free. What about integrating with TensorFlow Quantum? It seems that the Rotosolve optimizer is already there: https://github.com/tensorflow/quantum/blob/master/tensorflow_quantum/python/optimizers/rotosolve_minimizer.py
Hi @lumander - I tend to agree. The rotoselect/rotosolve algorithms, being gradient free, are likely not as well suited to a TF implementation as the QNGOptimizer. Perhaps that could be a better starting point?
Out of curiosity, does the TensorFlow optimizer class allow for gradient-free optimization? Or is it hardcoded to always compute the gradient, even if it is not required?
What about integrating with tensorflow quantum? It seems that the rotosolve optimizer is already there
I just had a look - it appears that the TFQ rotosolve optimizer is very similar to our existing optimizer; it is simply an independent function that performs a for loop, independent of the built-in Keras style of optimization (in our case, the for loop is external and written by the user).
Perhaps, with the advent of #886, we will have a framework for modifying the existing Rotosolve optimizer to be framework independent.
I tend to agree. The rotoselect/rotosolve algorithms, being gradient free, are likely not as well suited to a TF implementation as the QNGOptimizer. Perhaps that could be a better starting point?
Yes, I'll think about it a little more and then move on to the next one ;)
Out of curiosity, does the TensorFlow optimizer class allow for gradient-free optimization? Or is it hardcoded to always compute the gradient, even if it is not required?
If you want to create a custom optimizer, you need to follow the steps here. Your custom methods are called only after the gradients have already been evaluated; in fact, the very first thing the minimize() method does is call compute_gradients. So TF always computes the gradient in some form.
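For reference, a minimal sketch of what a custom optimizer looks like under the legacy tf.keras.optimizers.Optimizer API (the class name and hyperparameters below are placeholders, not anything from TFQ or PennyLane) - the gradient-based flow is baked in:

```python
import tensorflow as tf

class PlainSGD(tf.keras.optimizers.Optimizer):
    """Toy custom optimizer: minimize() always computes gradients first,
    then hands them to _resource_apply_dense below."""

    def __init__(self, learning_rate=0.01, name="PlainSGD", **kwargs):
        super().__init__(name, **kwargs)
        self._set_hyper("learning_rate", learning_rate)

    def _resource_apply_dense(self, grad, var, apply_state=None):
        # called with an already-evaluated gradient for each variable
        lr = self._get_hyper("learning_rate", var.dtype)
        return var.assign_sub(lr * grad)

    def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
        raise NotImplementedError

    def get_config(self):
        config = super().get_config()
        config.update(
            {"learning_rate": self._serialize_hyperparameter("learning_rate")}
        )
        return config
```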
PennyLane currently supports quantum-aware optimizers including QNGOptimizer and RotosolveOptimizer. However, use of these optimizers is restricted to the NumPy interface and is not available when using the PyTorch or TensorFlow interfaces.
This issue requests that quantum-aware optimizers be made available in the PyTorch and TensorFlow interfaces. For example, currently a user could do the following in the NumPy interface:
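Roughly like this (a sketch assuming the `opt.step(cost, params)` call signature; the exact RotosolveOptimizer API depends on the PennyLane version):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

opt = qml.RotosolveOptimizer()
params = np.array([0.3, 0.25], requires_grad=True)

# gradient-free optimization loop, written externally by the user
for _ in range(20):
    params = opt.step(circuit, params)
```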
It would be nice to do the following in the TF interface (not currently available):
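Something along these lines, where RotosolveOptimizerTF is the hypothetical class discussed below and does not exist yet:

```python
import tensorflow as tf
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="tf")
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))

params = tf.Variable([0.3, 0.25], dtype=tf.float64)
opt = RotosolveOptimizerTF()  # hypothetical, requested in this issue

for _ in range(20):
    opt.minimize(lambda: circuit(params), [params])
```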
This may involve making a new RotosolveOptimizerTF class that has a similar API to the tf.keras.optimizers.Optimizer base class.
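As a rough illustration only (not a design proposal), a gradient-free class exposing a Keras-style minimize() could look something like the following, applying the closed-form Rotosolve update to each angle in turn:

```python
import numpy as np
import tensorflow as tf

class RotosolveOptimizerTF:
    """Hypothetical gradient-free optimizer with a Keras-style minimize().

    Assumes each entry of var_list is a 1-D tf.Variable of rotation angles
    and objective_fn is a zero-argument callable returning a scalar cost.
    """

    def minimize(self, objective_fn, var_list):
        for var in var_list:
            values = var.numpy().copy()
            for d in range(values.size):

                def cost_with(theta):
                    values[d] = theta
                    var.assign(values)
                    return float(objective_fn())

                # closed-form minimum of the cost as a sinusoid in values[d]
                h_0 = cost_with(0.0)
                h_p = cost_with(np.pi / 2)
                h_m = cost_with(-np.pi / 2)
                theta_star = -np.pi / 2 - np.arctan2(
                    2 * h_0 - h_p - h_m, h_p - h_m
                )
                cost_with(theta_star)  # write the optimal angle back into var
        return var_list
```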