XanaduAI / QMLT

The Quantum Machine Learning Toolbox (QMLT) is a Strawberry Fields application that simplifies the optimization of variational quantum circuits (also known as parametrized quantum circuits).
https://qmlt.readthedocs.io
Apache License 2.0

Feature/sample from distribution tf #5

Closed mstechly closed 5 years ago

mstechly commented 6 years ago

Since the previous version of sample_from_distribution was not working with the tf backend, I added my own version. Since there are now two separate versions of sample_from_distribution, one for the numerical backend and one for tf, I deleted helpers.py and moved this function into the appropriate files.
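For context, a minimal sketch of what the numerical sampling helper discussed in this thread might look like. The function name matches the thread, but the body is illustrative, not the actual QMLT implementation:

```python
import numpy as np

def sample_from_distribution(distribution, rng=None):
    """Draw one outcome index from a normalized probability distribution.

    Illustrative stand-in for the numerical helper discussed in this
    thread; not the actual QMLT code.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = np.asarray(distribution, dtype=float)
    p = p / p.sum()  # renormalize to guard against rounding error
    return int(rng.choice(len(p), p=p))

# Example: sample one outcome from a three-outcome distribution
sample = sample_from_distribution([0.1, 0.2, 0.7], rng=np.random.default_rng(0))
```

A tf counterpart cannot reuse this body directly, because during graph construction the probabilities are `tf.Tensor` objects rather than concrete arrays, which is why a separate version was needed.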

co9olguy commented 6 years ago

@mstechly Thanks for the PR :)

@mariaschuld what do you think of this PR?

@mstechly I notice that the change causes test failures (probably because you moved the location of the function). Can you update the PR to make the tests pass?

Also, I believe this function is used in the tutorials in the docs. We would also need to update those and ensure they still run as expected.

codecov[bot] commented 6 years ago

Codecov Report

:exclamation: No coverage uploaded for pull request base (master@7ebba7c). The diff coverage is 100%.

@@           Coverage Diff           @@
##             master     #5   +/-   ##
=======================================
  Coverage          ?   100%           
=======================================
  Files             ?     11           
  Lines             ?    755           
  Branches          ?      0           
=======================================
  Hits              ?    755           
  Misses            ?      0           
  Partials          ?      0
| Impacted Files | Coverage Δ |
| --- | --- |
| qmlt/tf/helpers.py | 100% <100%> (ø) |
| qmlt/numerical/helpers.py | 100% <100%> (ø) |

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 7ebba7c...4989b8b.

co9olguy commented 6 years ago

@mstechly Great, thanks! :smile: Going to assign @mariaschuld to review this one. If she decides to merge, we'll also have to update the docs, as this function appears there as well: https://qmlt.readthedocs.io/en/latest/tutorials/numerical.html?highlight=sample_from_distr and https://qmlt.readthedocs.io/en/latest/tutorials/tensorflow.html?highlight=sample_from_distr

mariaschuld commented 6 years ago

Will merge and update the documentation as soon as I am back from vacation!

mstechly commented 5 years ago

@mariaschuld I shortened the descriptions as you asked. Concerning the docs: I suppose there is no need for helpers.rst anymore. However, I don't know why numerical.helpers.rst and tf.helpers.rst are different; could you explain which version is better and why?

mariaschuld commented 5 years ago

Hey @mstechly! Overall this is looking good, but I am not entirely sure what use case you have in mind for the tf sample_from_distribution function. In particular, I see that in the tf unsupervised tutorial you still import the numerical one, since the sampling is part of the post-processing (when we are no longer dealing with tf.Tensor objects).

Isn't it more confusing to rely on a helper from the numerical library rather than to import it from a higher level as before? Did you need the tf version for your purpose somewhere?

If you have a use case, we can think about putting your tf version into the tf helper and leaving everything else as is. Would that make sense to you?

mstechly commented 5 years ago

@mariaschuld Honestly, I'm also not sure if the solution I proposed is the best one. I thought about where to include this code and decided this way is better, since I don't need to import tf in helpers.py. Another solution I considered was adding a use_tf flag to the existing sample_from_distribution and putting the tf import and all the tf logic inside the branch taken when the flag is true.
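The flag-based alternative described above might look like the following sketch. The deferred import follows the comment; everything else (including the use of `tf.random.categorical`, the TF 2.x name for what was `tf.multinomial` at the time) is an assumption, not QMLT's API:

```python
import numpy as np

def sample_from_distribution(distribution, use_tf=False):
    # Sketch of the single-function alternative: all TF logic, including
    # the import itself, lives inside the branch, so the numerical path
    # never requires tensorflow to be installed.
    if use_tf:
        import tensorflow as tf  # deferred import, tf backend only
        logits = tf.math.log([distribution])  # categorical expects log-probabilities
        return tf.random.categorical(logits, num_samples=1)[0, 0]
    p = np.asarray(distribution, dtype=float)
    return int(np.random.choice(len(p), p=p / p.sum()))
```

The trade-off is a hidden conditional import versus two clearly separated backend-specific helpers, which is essentially the design question being discussed in this thread.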

You can check out my use-case here: https://github.com/BOHRTECHNOLOGY/yellow_submarine/blob/master/maxcut_solver_tf.py , line 105.

I wanted to calculate the loss function based on a single output of the circuit (a bitstring). But since the outputs are probabilistic, I wanted to use several outputs to calculate a mean loss for a given set of parameters (similar to batch learning). I wasn't able to achieve that with:

co9olguy commented 5 years ago

Hi @mstechly,

I'm trying to understand what the use-case is here as well. If your goal is to calculate a mean loss value, can you not do this directly from the probabilities (i.e., take all_fock_probs and weight the values by these probabilities) instead of estimating the mean empirically from a few samples?
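Computing the mean loss exactly from the probabilities, as suggested here, could be sketched as follows. The function name and the flat probability array are illustrative; in Strawberry Fields, `all_fock_probs` returns a multi-mode array that would first need flattening:

```python
import numpy as np

def exact_mean_loss(fock_probs, loss_fn):
    # Exact expectation of the loss over all basis outcomes: weight each
    # outcome's loss by its probability instead of estimating the mean
    # from a handful of samples.
    p = np.asarray(fock_probs, dtype=float).ravel()
    losses = np.array([loss_fn(i) for i in range(p.size)])
    return float(np.dot(p, losses))

# Expectation for a toy distribution: 0.1*0 + 0.2*1 + 0.7*4 = 3.0
mean = exact_mean_loss([0.1, 0.2, 0.7], loss_fn=lambda outcome: outcome ** 2)
```

This is deterministic and noise-free, which is exactly why it is attractive for small simulations; the scaling objection raised in the next comment is that it requires evaluating the loss for every possible outcome.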

mstechly commented 5 years ago

@co9olguy I can, but I think it's less correct from a methodological point of view.

I'm solving MaxCut problem. If I wanted to use probabilities in conjunction with my probabilities, I had to evaluate every possible solution to calculate loss. But it doesn't make sense - if I can calculate every possible solution, then I don't need my quantum circuit to solve it. Sure - I can do that for small circuits, but this solution won't scale for bigger graphs. But what I can do is evaluating every sample I draw from the distribution and that's what I needed this function for.