Closed. josh146 closed this pull request 4 years ago.
Merging #320 into tf2 will increase coverage by 0.00%. The diff coverage is 100.00%.
Coverage Diff:

| | tf2 | #320 | +/- |
|---|---|---|---|
| Coverage | 97.80% | 97.80% | |
| Files | 52 | 52 | |
| Lines | 6366 | 6378 | +12 |
| Hits | 6226 | 6238 | +12 |
| Misses | 140 | 140 | |
| Impacted Files | Coverage Δ | |
|---|---|---|
| strawberryfields/backends/tfbackend/circuit.py | 96.00% <100.00%> (ø) | |
| strawberryfields/ops.py | 99.00% <100.00%> (+<0.01%) | :arrow_up: |
| strawberryfields/parameters.py | 99.02% <100.00%> (+0.06%) | :arrow_up: |
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data.

Continue to review the full report at Codecov. Last update b1d19d7...0f48a4e.
Just checking in here. This PR still implements solution 1 as outlined in the original description, correct?
Yes, thanks @antalszava. Now that the v0.13.0rc is out, focus should be returned to this PR. It should be close to merging.
Left a couple of comments/questions related to testing :slightly_smiling_face:
Thanks for resolving my comments, looks great! :1st_place_medal:
Context:
The new parameter handling in the Strawberry Fields frontend processes gate parameters symbolically using SymPy. When ready to evaluate, the symbolic expression is evaluated by lambdifying it, via the provided NumPy or TensorFlow printers.
However, there is a subtlety: while most gates simply take their parameters and pass them directly to the backend, some operations transform two real parameters `(r, phi)` into a single complex value `z = r * exp(1j * phi)`. One example is the `Dgate`. While this works fine when `self.p[0]` and `self.p[1]` are real NumPy arrays, issues arise when they are instead real TensorFlow tensors, as TensorFlow does not perform implicit type casting. The resulting error propagates all the way up from the frontend.
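As a minimal NumPy sketch of the transformation described above (hypothetical helper name, not the actual `Dgate` source):

```python
import numpy as np

# Hypothetical sketch of the (r, phi) -> z transformation some gates
# perform before handing the parameter to the backend.
def polar_to_complex(r, phi):
    return r * np.exp(1j * phi)

# NumPy implicitly upcasts the real inputs, so z is complex128 even
# though phi == 0. Real TensorFlow tensors would instead raise a dtype
# error at the multiplication, since TF refuses to mix float and
# complex dtypes without an explicit cast.
z = polar_to_complex(np.float64(0.5432), np.float64(0.0))
print(z.dtype)
```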
There are two solutions to this:

1. **Quick fix:** simply update the SymPy expression evaluation, so that TensorFlow tensors are correctly cast to `tf.complex128` if the evaluated expression returns a complex value.
   - Pros: very minimal change.
   - Cons: unnecessary casting.
2. **Better long-term fix:** the backend API should not be passed complex values `z` that differ from the gates' `(r, phi)` arguments. This avoids the casting altogether, and reduces redundant parameter processing.
   - Pros: no unnecessary casting, removes redundant processing, less complicated logic.
   - Cons: much larger change; the frontend, plus all backends, must be updated.
Solution (2) should be implemented regardless, the question is whether now, or later. Note that we have previously discussed implementing solution (2) solely to reduce complicated logic, and avoid issues with unclear conventions.
Description of the change:

- The `par_evaluate` function now accepts a `dtype` argument, specifying whether the free parameters in the expression should be cast to a specific NumPy dtype before the expression is evaluated. If not provided, no casting occurs. If the free parameters are TensorFlow tensors, the equivalent TensorFlow dtype is used for casting.
- `Coherent`, `Sgate`, `Dgate`, `BSgate` (just the `r` parameter), `Catstate`, and `S2gate` now cast all arguments to complex before the parameter transformation, if the phase is non-zero.
- The `xfail` marker has been removed from `test_teleportation_fidelity`.
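As a rough illustration of the new `dtype` behaviour (a simplified stand-in; the real `par_evaluate` in `strawberryfields/parameters.py` operates on SymPy expressions):

```python
import numpy as np

# Simplified stand-in for the casting rule described above: free
# parameters are optionally cast to a given NumPy dtype before the
# expression is evaluated; with dtype=None, no casting occurs.
def evaluate_params(params, dtype=None):
    if dtype is not None:
        params = [np.asarray(p, dtype=dtype) for p in params]
    return params

# With everything pre-cast to complex128, the (r, phi) -> z
# transformation no longer produces a dtype mismatch.
r, phi = evaluate_params([0.5432, 0.1], dtype=np.complex128)
z = r * np.exp(1j * phi)
```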
Benefits:
The following now works, as expected:
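The original code block here was not preserved; as a hedged sketch of the behaviour it likely demonstrated (assuming TensorFlow is available; the values are illustrative, not the PR's actual example):

```python
import tensorflow as tf

# Previously, combining a real TF tensor with a complex value raised a
# dtype error; with the explicit cast to complex128 in place, the
# Dgate-style r * exp(1j * phi) transformation evaluates cleanly.
r = tf.constant(0.5432, dtype=tf.float64)
phi = tf.constant(0.1, dtype=tf.float64)

z = tf.cast(r, tf.complex128) * tf.exp(1j * tf.cast(phi, tf.complex128))
print(z.dtype)
```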
Drawbacks:
Unnecessary casting for the above gates.
Solution (2) should still be implemented.