Closed nwittler closed 2 years ago
The formatting and linting test should be fixed by merging (or preferably rebasing on) latest dev, and then either amending your latest commit if you have `pre-commit` installed, or running `black c3/` if you have `black` installed in your virtual environment.
Weird, I actually ran black manually and I have the pre-commits in the same setup I'm always using. But yes, there's a lot of rough stuff still left.
You probably have an older version of black. Maybe try `pip install -U black` and then `black c3/`?
`test_qiskit_physics` fails with an assertion error on the final readout populations. This indicates some possible change in the API of the `compute_propagators()`, `evaluate()` or `process()` methods, since the qiskit backend (mostly) only uses the public methods without relying on any specific implementation quirks.
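The backend's reliance on this public call chain can be sketched with stand-ins. Note that `FakeExperiment`, its method signatures, and `run_backend` are all hypothetical stubs chosen for illustration; the real c3 classes have richer signatures. The point is only the contract the qiskit backend depends on: `compute_propagators()`, then `evaluate()`, then `process()`:

```python
from typing import Any, Dict, List


class FakeExperiment:
    """Hypothetical stand-in for the experiment object the qiskit backend drives."""

    def compute_propagators(self) -> Dict[str, Any]:
        # Would compute one unitary per gate; here each gate maps to a tag.
        return {"rx90p": "U_rx90p"}

    def evaluate(self, sequences: List[List[str]]) -> List[Dict[str, float]]:
        # Would apply the propagators to an initial state; returns populations.
        return [{"0": 0.5, "1": 0.5} for _ in sequences]

    def process(self, populations: List[Dict[str, float]]) -> List[Dict[str, float]]:
        # Readout / normalization step applied to the raw populations.
        return populations


def run_backend(exp: FakeExperiment, sequences: List[List[str]]) -> List[Dict[str, float]]:
    # The backend only chains the three public methods, so any change in
    # their API or return shape breaks it, even without touching internals.
    exp.compute_propagators()
    pops = exp.evaluate(sequences)
    return exp.process(pops)


result = run_backend(FakeExperiment(), [["rx90p"]])
print(result)
```

Because the backend touches nothing but these three methods, a failing readout assertion points at a change in one of them rather than at the backend itself.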
`git bisect` indicates that this failing test was first introduced in fb3be66d4035c28b8bf61905d33a027f3331b4dc. There are quite a lot of changes in that commit, so it is not clear how to pinpoint what caused the failing test.
Update 1 - `test_model_learning.py` was initially giving an assertion error on not converging to the given tolerance bound, but subsequently started throwing an obscure numpy clipping error, also from the same commit as above.
Update 2 - Unsupported TF graph operations seem to be the reason for a lot of the still-failing tests (`test_optim()` and `test_c1_robust()`): `OperatorNotAllowedInGraphError: using a tf.Tensor as a Python bool is not allowed`. The tests pass when you remove the `@tf.function` decorator for optimal control.
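The failure mode can be mimicked without TensorFlow. Inside a `tf.function` trace, a tensor is symbolic and has no concrete value, so TF makes `__bool__` raise rather than guess a branch; `SymbolicTensor` below is a made-up class that imitates only that behavior:

```python
class SymbolicTensor:
    """Made-up stand-in for a tf.Tensor during a tf.function trace.

    Like TF's OperatorNotAllowedInGraphError, truth-testing raises,
    because a symbolic value cannot pick a Python branch at trace time.
    """

    def __bool__(self):
        raise TypeError("using a symbolic Tensor as a Python bool is not allowed")


t = SymbolicTensor()
message = ""
try:
    if t:  # the `if tensor:` pattern that breaks under @tf.function
        pass
except TypeError as err:
    message = str(err)

print(message)
```

This is why removing the decorator "fixes" the tests: in eager mode the tensors have concrete values and truth-testing works, masking code that is not graph-compatible.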
Update 3 - `test_noise_devices` seems to be failing due to a type mismatch between what `process` expects as an input signal (`List[Dict[str, Any]]`) and what it gets. A similar error was fixed in e20d2d5, where the signal provided as an input in the test code was not of the required type. However, in `test_noise_devices`, the signal is not directly provided in the testing code and comes through the `experiment -> propagation -> generator` call stack. Needs investigation.
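A quick way to narrow this down is to probe the signal where it leaves the generator. The helper below is hypothetical (not part of c3), and the sample signal keys are invented for illustration; it only encodes the `List[Dict[str, Any]]` shape the report says `process` expects:

```python
from typing import Any


def matches_expected_signal_type(signal: Any) -> bool:
    """Hypothetical checker: True iff `signal` has the List[Dict[str, Any]]
    shape reportedly expected by process(); useful as a temporary probe
    inside the experiment -> propagation -> generator call stack."""
    return isinstance(signal, list) and all(
        isinstance(item, dict) and all(isinstance(k, str) for k in item)
        for item in signal
    )


# A signal in the expected shape vs. one that would trigger the mismatch.
good = [{"values": [0.0, 1.0], "ts": [0.0, 1.0]}]
bad = {"values": [0.0, 1.0]}  # a bare dict instead of a list of dicts

print(matches_expected_signal_type(good), matches_expected_signal_type(bad))
```

Dropping such a check (or a plain `assert`) at each hop of the call stack would show which component first emits the wrong shape.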
This pull request introduces 1 alert when merging 41db44bc4f72be0b8abae96cccd564aca1708df8 into 7801dc2daf21499f3837ec334063b8a8c3cb4951 - view on LGTM.com
Merging #177 (2815cb4) into dev (0c2a80a) will increase coverage by 0.58%. The diff coverage is 92.23%.
@@ Coverage Diff @@
## dev #177 +/- ##
==========================================
+ Coverage 72.65% 73.24% +0.58%
==========================================
Files 37 37
Lines 5559 5561 +2
==========================================
+ Hits 4039 4073 +34
+ Misses 1520 1488 -32
| Impacted Files | Coverage Δ | |
|---|---|---|
| c3/optimizers/modellearning.py | 89.82% <ø> (-0.27%) | :arrow_down: |
| c3/qiskit/c3_backend.py | 92.92% <ø> (ø) | |
| c3/generator/devices.py | 67.99% <79.59%> (-0.05%) | :arrow_down: |
| c3/experiment.py | 77.04% <86.95%> (+1.69%) | :arrow_up: |
| c3/c3objs.py | 84.95% <91.66%> (-1.60%) | :arrow_down: |
| c3/parametermap.py | 93.16% <95.65%> (ø) | |
| c3/signal/pulse.py | 81.17% <96.00%> (+2.00%) | :arrow_up: |
| c3/signal/gates.py | 90.59% <96.42%> (-0.12%) | :arrow_down: |
| c3/generator/generator.py | 91.08% <100.00%> (-0.18%) | :arrow_down: |
| c3/libraries/algorithms.py | 55.00% <100.00%> (+0.17%) | :arrow_up: |
| ... and 11 more | | |
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0c2a80a...2815cb4. Read the comment docs.
The failing notebook (`two_qubit_entangling_gate.ipynb`) crashes with a dead kernel, which might indicate scalability/compute/memory issues. I checked by running it locally and ran into the same issue. I also tried reducing the `maxfun` for optimization to 10 instead of 25, and the kernel continues to crash.
There seem to be some memory leak issues (or is this a feature of the new TF graph mode computation?), since `two_qubit_entangling_gate.ipynb` now takes 12 GB (6x) more RAM and double the time compared to the version in dev. Additionally, you get the following error:
Some increased memory requirement is expected but I don't have the experience on how much is reasonable here. I suspect the response function module to be a major cause of trouble. Removing it from the example chains might improve things. I'll check.
This pull request introduces 1 alert when merging 2815cb4669b5251a5d1c36a20009607869153e3a into 52d0a12d568b6f12e24d87434459a37550fa901b - view on LGTM.com
What
Fixes the control flow of the simulation to allow for efficient evaluation as a compiled TensorFlow function.
Why
This removes performance bottlenecks, allowing for better scaling and parallelization. Closes #74
How
Introducing one `@tf.function` decorator at the goal function level in the optimizer class, plus several fixes along the way to make the code compatible with graph compilation, e.g. supported operations and function names, and avoiding state-dependent logic that would require retracing.
Remarks
This is a proof of concept and a base for discussion. Solutions are currently bolted onto the Experiment class and should probably move to a new Propagation class. Depending on the method, a number of pre-computations might be beneficial to make the pure math portion as lean and efficient as possible.
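The retracing concern can be illustrated without TensorFlow. The `compilable` decorator below is a made-up, loose analogy of `tf.function`'s behavior of caching one trace per distinct Python-level input it observes; state-dependent Python logic in the goal function forces new traces, which is exactly the cost the fixes above avoid:

```python
def compilable(fn):
    """Made-up analogy of tf.function: cache one 'compiled' trace per
    distinct Python-level value seen, retracing on new values."""
    traces = {}

    def wrapper(x, flag):
        # `flag` is a Python-level value; a value not seen before
        # triggers a new trace (in TF: a full graph recompilation).
        if flag not in traces:
            traces[flag] = f"trace for flag={flag}"
        wrapper.num_traces = len(traces)
        return fn(x, flag)

    wrapper.num_traces = 0
    return wrapper


@compilable
def goal(x, flag):
    # Stand-in goal function with Python-level branching.
    return x * 2 if flag else x


goal(1.0, True)
goal(2.0, True)   # same flag: the cached trace is reused
goal(3.0, False)  # new Python-level value: retrace
print(goal.num_traces)
```

This is why the decorator is placed once, at the goal function level: everything inside should depend only on tensor values, so a single trace can be reused across optimizer iterations.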
Checklist
Please include and complete the following checklist. Your Pull Request is (in most cases) not ready for review until the following have been completed. You can create a draft PR while you are still completing the checklist. Check the Contribution Guidelines for more details. You can mark an item as complete with the `- [x]` prefix.
- [x] `black` and `flake8` have been used to ensure styling guidelines are met
- numpydoc style