Closed: sduquemesa closed this 1 year ago
Really interesting! This work might be relevant: arXiv:2202.07332
I am kind of mind blown that in your proposed replacement you literally multiply 150! by a number on the order of its inverse and do not get massive errors.
Merging #351 (d510eb1) into master (6e6caf3) will not change coverage. The diff coverage is 100.00%.
@@           Coverage Diff            @@
##            master      #351   +/-  ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           24        24
  Lines         1735      1737    +2
=========================================
+ Hits          1735      1737    +2
Impacted Files | Coverage Δ
---|---
thewalrus/fock_gradients.py | 100.00% <100.00%> (ø)
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6e6caf3...d510eb1. Read the comment docs.
> I am kind of mind blown that in your proposed replacement you literally multiply 150! by a number of the order of the inverse of that and not get massive errors
@nquesada, strangely the result of that operation seems to be machine-dependent: on some machines it gives nan, on others zero. We now take the log of the expression to avoid the unstable multiplication of a very small float by a very large one. This pushes the instabilities out by about an order of magnitude (see the graphs in the PR description).
Context: Currently The Walrus uses the recursion relations for the displacement operator from Fast optimization of parametrized quantum optical circuits. This approach leads to numerical instabilities for large cutoff and displacement values, as pointed out by @JacobHast.
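For reference, the recursion-relation construction looks roughly like the following (a sketch of my reading of the cited paper, not the actual library code; `displacement_recursive` is a hypothetical name). Because each entry is built from its neighbours, rounding errors in early entries compound as the cutoff grows:

```python
import numpy as np

def displacement_recursive(alpha, cutoff):
    """Build <m|D(alpha)|n> for m, n < cutoff via recursion relations.

    Uses D[m,0] = alpha/sqrt(m) * D[m-1,0] and
    D[m,n] = -conj(alpha)/sqrt(n) * D[m,n-1] + sqrt(m/n) * D[m-1,n-1],
    seeded by D[0,0] = exp(-|alpha|^2 / 2).
    """
    D = np.zeros((cutoff, cutoff), dtype=complex)
    D[0, 0] = np.exp(-abs(alpha) ** 2 / 2)
    for m in range(1, cutoff):
        D[m, 0] = alpha / np.sqrt(m) * D[m - 1, 0]
    for n in range(1, cutoff):
        for m in range(cutoff):
            D[m, n] = -np.conj(alpha) / np.sqrt(n) * D[m, n - 1]
            if m > 0:
                D[m, n] += np.sqrt(m / n) * D[m - 1, n - 1]
    return D
```

Since every entry feeds into later ones, a small error near the origin of the matrix propagates through the whole table, which is one way the large-cutoff instabilities can arise.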
Description of the Change: This PR instead computes the displacement operator's matrix elements with respect to the Fock basis directly, as given in Ordered Expansions in Boson Amplitude Operators.
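The closed-form route can be sketched as follows (my reading of the standard Cahill-Glauber expression, with the factorial ratio handled in log space; `displacement_element` is a hypothetical name, not the exact PR code):

```python
import numpy as np
from math import lgamma
from scipy.special import eval_genlaguerre

def displacement_element(m, n, alpha):
    """<m|D(alpha)|n> = sqrt(n!/m!) * alpha^(m-n) * exp(-|alpha|^2 / 2)
                        * L_n^(m-n)(|alpha|^2),   for m >= n."""
    if m < n:
        # D(alpha)^dagger = D(-alpha), so <m|D(alpha)|n> = conj(<n|D(-alpha)|m>)
        return np.conj(displacement_element(n, m, -alpha))
    x = abs(alpha) ** 2
    # sqrt(n!/m!) computed as exp of a log difference to avoid overflow
    log_ratio = 0.5 * (lgamma(n + 1) - lgamma(m + 1))
    return np.exp(log_ratio - x / 2) * alpha ** (m - n) * eval_genlaguerre(n, m - n, x)
```

Each entry now depends only on its own indices, so an error in one entry no longer contaminates the rest of the matrix.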
Comparison plots (D=30, cutoff=2500): [plots omitted]
Benefits: Numerically stable displacement operation.
Possible Drawbacks: A quick benchmark (r=10, phi=0, cutoff=200) shows the former approach is slightly faster:
Former approach: 267 µs ± 654 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
This PR: 288 µs ± 341 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
Related GitHub Issues: None