XanaduAI / strawberryfields

Strawberry Fields is a full-stack Python library for designing, simulating, and optimizing continuous variable (CV) quantum optical circuits.
https://strawberryfields.ai
Apache License 2.0

Number expectation #348

Closed: nquesada closed this pull request 4 years ago

nquesada commented 4 years ago

Adds the method number_expectation in BaseGaussianState and BaseFockState.
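
For context, a minimal usage sketch of the new method (the circuit and backend choice below are arbitrary, and the exact return value of number_expectation may differ between versions):

import strawberryfields as sf
from strawberryfields import ops

# Prepare a two-mode squeezed vacuum on the Gaussian backend.
prog = sf.Program(2)
with prog.context as q:
    ops.S2gate(0.5) | (q[0], q[1])

state = sf.Engine("gaussian").run(prog).state

# Photon-number expectation over both modes, <n_0 n_1>.
# Assumes number_expectation takes a list of mode indices.
print(state.number_expectation([0, 1]))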

codecov[bot] commented 4 years ago

Codecov Report

Merging #348 into master will increase coverage by 0.01%. The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master     #348      +/-   ##
==========================================
+ Coverage   97.68%   97.69%   +0.01%     
==========================================
  Files          52       52              
  Lines        6446     6478      +32     
==========================================
+ Hits         6297     6329      +32     
  Misses        149      149              
Impacted Files                          Coverage Δ
strawberryfields/api/result.py          100.00% <ø> (ø)
strawberryfields/_version.py            100.00% <100.00%> (ø)
strawberryfields/api/connection.py      100.00% <100.00%> (ø)
strawberryfields/backends/states.py     96.98% <100.00%> (+0.26%)

nquesada commented 4 years ago

For some reason CodeFactor is complaining about things I did not change, and I'm not sure how to fix it. Other than that, this PR is ready for review.

nquesada commented 4 years ago

Any idea why CodeFactor is complaining about things I did not change? @antalszava @josh146?

antalszava commented 4 years ago

I'd think the CodeFactor warnings come up because those files were edited. But they seem to have been resolved now. 🙂

nquesada commented 4 years ago

Seems like the tests failed for the batched mode of the TF backend. I guess the code we wrote does not work with TF batching. Is there a way to mark this?

josh146 commented 4 years ago

@nquesada: You can include the batch_size fixture:

import pytest

def test(batch_size):
    # Skip this test when the suite runs the TF backend in batched mode.
    if batch_size is not None:
        pytest.skip("Does not support batch mode")

nquesada commented 4 years ago

Where am I supposed to add this? I don't have a way to test where it should go locally, since my laptop would likely die a slow death doing batched TF computations.

antalszava commented 4 years ago

If you've identified the tests that need it, you can include batch_size in the signature of the test case. What this means is the following.

Take test_number_expectation_displaced_squeezed(self, setup_backend, tol) as an example.

If you would like this to be skipped for the batched TF case, the signature becomes test_number_expectation_displaced_squeezed(self, setup_backend, tol, batch_size), and you start the test case with the two lines Josh posted above. This ensures that the test is skipped whenever batch_size is specified.

Edit: the following tests seem to have errored:

test_number_expectation_vacuum
test_number_expectation_displaced_squeezed
test_number_expectation_two_mode_squeezed
test_number_expectation_four_modes
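
Applied to the first of these, the change is a small sketch along the following lines (the enclosing class name is illustrative, and setup_backend, tol, and batch_size are assumed to be the existing fixtures of the backend test suite):

import pytest

class TestNumberExpectation:
    def test_number_expectation_displaced_squeezed(self, setup_backend, tol, batch_size):
        # Skip this case when the suite runs the TF backend in batched mode.
        if batch_size is not None:
            pytest.skip("Does not support batch mode")
        # ... the original body of the test follows unchanged ...
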
nquesada commented 4 years ago

By the way, I think the method state.mean_photon should be removed since this can now be done by passing a list with a single mode to number_expectation. I guess the only problem is that we still don't have a way to do this in batch mode, although I suspect supporting batching is not too difficult.
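
For example, on a non-batched backend the overlap looks roughly like this (a sketch; the return values of mean_photon and number_expectation are assumptions that may differ between versions):

import strawberryfields as sf
from strawberryfields import ops

# Single-mode squeezed vacuum on the Gaussian backend.
prog = sf.Program(1)
with prog.context as q:
    ops.Sgate(0.3) | q[0]

state = sf.Engine("gaussian").run(prog).state

# The single-mode quantity mean_photon covers today ...
print(state.mean_photon(0))
# ... expressed through the new method with a one-element mode list.
print(state.number_expectation([0]))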

josh146 commented 4 years ago

By the way, I think the method state.mean_photon should be removed since this can now be done by passing a list with a single mode to number_expectation.

I recommend keeping mean_photon for now, for two reasons:

  1. A lot of external code uses it.

  2. The TF backend overrides it with a batched implementation in FockStateTF.mean_photon.

A partial solution could be:
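
One possible shape for such a compromise, sketched under assumptions rather than taken from a confirmed design: let the non-batched base class delegate the single-mode case to the new method, while FockStateTF keeps its batched override. The delegation and return handling below are illustrative only.

class BaseFockState:
    def number_expectation(self, modes):
        # Added in this PR: expectation of the product of number operators.
        ...

    def mean_photon(self, mode, **kwargs):
        # Illustrative delegation: the single-mode expectation comes from
        # number_expectation; variance handling is omitted in this sketch.
        return self.number_expectation([mode])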

josh146 commented 4 years ago

@nquesada, this is now ready to be merged!