Closed: nquesada closed this pull request 4 years ago.
Merging #348 into master will increase coverage by 0.01%. The diff coverage is 100.00%.
```diff
@@            Coverage Diff             @@
##           master     #348      +/-   ##
==========================================
+ Coverage   97.68%   97.69%   +0.01%
==========================================
  Files          52       52
  Lines        6446     6478      +32
==========================================
+ Hits         6297     6329      +32
  Misses        149      149
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| strawberryfields/api/result.py | 100.00% <ø> (ø) | |
| strawberryfields/_version.py | 100.00% <100.00%> (ø) | |
| strawberryfields/api/connection.py | 100.00% <100.00%> (ø) | |
| strawberryfields/backends/states.py | 96.98% <100.00%> (+0.26%) | :arrow_up: |
Continue to review the full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 45064ae...f2e2b2e. Read the comment docs.
For some reason CodeFactor is complaining about issues I did not introduce, and I'm not sure how to fix them. Other than that, this PR is ready for review.
Any idea why CodeFactor is complaining about things I did not do? @antalszava @josh146?
I'd think the CodeFactor issues came up because those files were edited. But they seem to have been resolved now. :slightly_smiling_face:
Seems like the test failed for the batched mode of TF. I guess the code we wrote cannot work with TF in batch mode. Is there a way to mark this?
@nquesada: You can include the `batch_size` fixture:

```python
import pytest

def test(batch_size):
    if batch_size is not None:
        pytest.skip("Does not support batch mode")
```
Where am I supposed to add this? I don't have a way to test where it should go locally, since my laptop would likely die a slow death doing batched TF computations.
If you've identified the tests which would need it, you could include `batch_size` in the signature of the test case. What this means is the following. Take `test_number_expectation_displaced_squeezed(self, setup_backend, tol)` as an example. If you would like this to be skipped for the batched TF case, then it becomes `test_number_expectation_displaced_squeezed(self, setup_backend, tol, batch_size)`, and you start the test case with the two lines Josh linked. This will ensure that the test is skipped if `batch_size` was specified for the test.
Edit: the following tests seem to have errored:

- `test_number_expectation_vacuum`
- `test_number_expectation_displaced_squeezed`
- `test_number_expectation_two_mode_squeezed`
- `test_number_expectation_four_modes`
By the way, I think the method `state.mean_photon` should be removed, since this can now be done by passing a list with a single mode to `number_expectation`. I guess the only problem is that we still don't have a way to do this in batch mode, although I suspect supporting batching is not too difficult.
> By the way, I think the method `state.mean_photon` should be removed since this can now be done by passing a list with a single mode to `number_expectation`.
I recommend keeping `mean_photon` for now, for two reasons:

- a lot of external code uses it
- the TF backend overrides it with a batched implementation in `FockStateTF.mean_photon`

A partial solution could be:

- `BaseState.mean_photon` has a default implementation that simply calls `number_expectation`.
- Backends are free to override `mean_photon` if they have a better/different implementation.
@nquesada, this is now ready to be merged!
Adds the method `number_expectation` in `BaseGaussianState` and `BaseFockState`.