Open zaqqwerty opened 3 years ago
@zaqqwerty I agree with that. The first target application was the SGDifferentiator, but StochasticCost or StochasticLoss itself stands alone. Thank you for your research!
Perhaps there is a nice way we could work this in throughout the entire stack in TFQ. Maybe we can define an op attr (https://www.tensorflow.org/guide/create_op#attrs) for the op returned by `get_sampled_expectation_op`, e.g. a bool `subsample_paulis=True/False`. From there it would be simple to have the Keras layers choose values for it based on user preferences. What do people think?
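To make the proposal concrete, here is a minimal Python sketch of what such a flag could control. Everything here is a hypothetical stand-in, not TFQ API: `sampled_expectation`, its `k` argument, and the modeling of a PauliSum as `(coefficient, term expectation)` pairs are illustrative only; the real op would estimate each term's expectation from measurement samples.

```python
import random

def sampled_expectation(pauli_sum, subsample_paulis=False, k=2, rng=None):
    """Estimate <H> = sum_i c_i <P_i>, optionally from a random subset of terms.

    `pauli_sum` is a list of (coefficient, term_expectation) pairs (a toy
    model of a PauliSum). With subsample_paulis=True, only k terms are
    evaluated per call; the partial sum is reweighted by n/k so the
    estimator stays unbiased in expectation.
    """
    rng = rng or random.Random(0)
    n = len(pauli_sum)
    if not subsample_paulis or k >= n:
        # Full evaluation: every term contributes.
        return sum(c * ev for c, ev in pauli_sum)
    # Subsampled evaluation: cheaper per call, noisier.
    subset = rng.sample(pauli_sum, k)
    return (n / k) * sum(c * ev for c, ev in subset)

terms = [(0.5, 1.0), (-1.25, 0.2), (2.0, -0.4), (0.75, 0.9)]
exact = sampled_expectation(terms)                       # full sum: 0.125
est = sampled_expectation(terms, subsample_paulis=True)  # cheaper, noisy
```

The flag defaulting to `False` would preserve current behavior, matching how optional attrs with defaults work in the linked guide.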
In the past, subsampling of PauliTerms from input PauliSums was handled by an option in the SGDifferentiator module. However, there were a few issues with its implementation there.
Fortunately, in discussions on the design doc for the new differentiators, it looks like this could be a feature we still want to keep and pull up to the level of `tfq.layers`, since subsampling of PauliTerms can speed up the estimation of expectation values in any context, not just when seeking gradients. See also #230 for the possible interaction of such a feature with Engine. I'm thinking of taking this on in the near future. Thoughts on this @MichaelBroughton @jaeyoo?
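As a sanity check on why term subsampling is safe to expose in any expectation-estimation context, here is a toy calculation (same hypothetical `(coefficient, term expectation)` model as above, not TFQ code) showing that a reweighted k-term estimator of sum_i c_i <P_i> is exactly unbiased: averaging it over every k-subset recovers the full sum, while each individual call only touches k of the n terms.

```python
import itertools

# Toy PauliSum: (coefficient, exact term expectation) pairs.
terms = [(0.5, 1.0), (-1.25, 0.2), (2.0, -0.4), (0.75, 0.9)]
full = sum(c * ev for c, ev in terms)  # exact value of <H>

k, n = 2, len(terms)
# Exhaustively average the reweighted estimator (n/k) * (partial sum)
# over all C(n, k) subsets of terms.
subsets = list(itertools.combinations(terms, k))
avg = sum((n / k) * sum(c * ev for c, ev in s) for s in subsets) / len(subsets)
# `avg` equals `full` up to float rounding: the estimator is unbiased,
# so repeated cheap k-term calls converge to the full expectation.
```

Each call costs k term evaluations instead of n, which is where the speedup comes from when PauliSums are large.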