tensorflow / quantum

Hybrid Quantum-Classical Machine Learning in TensorFlow
https://www.tensorflow.org/quantum
Apache License 2.0

StochasticCost as a feature of layers #411

Open zaqqwerty opened 3 years ago

zaqqwerty commented 3 years ago

In the past, subsampling of PauliTerms from input PauliSums was taken care of by an option in the SGDifferentiator module. However, there were a few issues with its implementation there:

  1. Instead of subsampling PauliTerms from PauliSums, it actually changed the shape of the measurement tensor by subsampling whole PauliSums.
  2. It had a Python implementation instead of a C++ op backend.
  3. It does not fit the new differentiator interface being implemented in #409.

Fortunately, in discussions on the design doc for the new differentiators, it looks like this is a feature we still want to keep and pull up to the level of tfq.layers, since subsampling PauliTerms can speed up the estimation of expectation values in any context, not just when computing gradients. See also #230 for the possible interaction of such a feature with Engine. I'm thinking of taking this on in the near future. Thoughts on this @MichaelBroughton @jaeyoo ?
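To make the idea concrete, here is a minimal, library-free sketch of the kind of PauliTerm subsampling being discussed. It models a PauliSum as a list of `(coefficient, expectation)` pairs, where each expectation stands in for a measured `<P_i>`, and forms an unbiased estimate of the full weighted sum by sampling terms with probability proportional to `|c_i|` and reweighting. The function name and data layout are hypothetical, not TFQ API:

```python
import random

def subsampled_expectation(terms, num_samples, rng):
    """Unbiased estimate of sum(c_i * <P_i>) over a PauliSum.

    `terms` is a list of (coefficient, expectation) pairs; terms are
    sampled with probability proportional to |coefficient| and each
    draw is reweighted so the estimator stays unbiased.
    """
    total_weight = sum(abs(c) for c, _ in terms)
    probs = [abs(c) / total_weight for c, _ in terms]
    indices = rng.choices(range(len(terms)), weights=probs, k=num_samples)
    # Importance-sampling estimator: average of c_i * <P_i> / p_i.
    return sum(terms[i][0] * terms[i][1] / probs[i] for i in indices) / num_samples

# Example: a 3-term "PauliSum" whose exact value is
# 0.5*1.0 + (-1.5)*0.2 + 2.0*(-0.7) = -1.2.
terms = [(0.5, 1.0), (-1.5, 0.2), (2.0, -0.7)]
estimate = subsampled_expectation(terms, num_samples=20000, rng=random.Random(0))
```

With enough samples the estimate concentrates around the exact sum, while each individual sample only touches one term, which is where the speedup for large PauliSums would come from.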

jaeyoo commented 3 years ago

@zaqqwerty I agree with that. The first target application was the SGDifferentiator, but StochasticCost (or StochasticLoss) is useful as a standalone feature in itself. Thank you for your research!

MichaelBroughton commented 3 years ago

Perhaps there is a nice way we could work this in throughout the entire stack in TFQ. Maybe we can define an attr on the op returned by get_sampled_expectation_op, e.g. a bool subsample_paulis=True/False (https://www.tensorflow.org/guide/create_op#attrs). From there it would be simple to have the Keras layers choose values for this based on user preferences. What do people think?
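A rough sketch of how that flag might be plumbed from a layer down to the op, using plain-Python stand-ins rather than the real TFQ op factory or Keras machinery (the names `get_sampled_expectation_op` and `subsample_paulis` follow the proposal above; everything else here is hypothetical):

```python
import random

def get_sampled_expectation_op(subsample_paulis=False, seed=None):
    """Stand-in for the real op factory; `subsample_paulis` mirrors the
    proposed bool attr on the underlying op."""
    rng = random.Random(seed)

    def op(terms, num_samples=100):
        if not subsample_paulis:
            # Deterministic path: sum every term exactly.
            return sum(c * v for c, v in terms)
        # Stochastic path: sample terms proportionally to |c| and reweight.
        total = sum(abs(c) for c, _ in terms)
        probs = [abs(c) / total for c, _ in terms]
        draws = rng.choices(range(len(terms)), weights=probs, k=num_samples)
        return sum(terms[i][0] * terms[i][1] / probs[i] for i in draws) / num_samples

    return op

class ExpectationLayer:
    """Minimal stand-in for a tfq.layers layer: the user preference is
    captured at construction time and forwarded to the op."""

    def __init__(self, subsample_paulis=False, seed=None):
        self._op = get_sampled_expectation_op(subsample_paulis, seed)

    def __call__(self, terms, num_samples=100):
        return self._op(terms, num_samples)

exact_layer = ExpectationLayer(subsample_paulis=False)
stochastic_layer = ExpectationLayer(subsample_paulis=True, seed=0)
```

The point of the design is that the stochastic behavior lives in the op, so every consumer of the op (layers, differentiators) gets it for free once they expose the flag.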