Closed: Strilanc closed this issue 2 years ago
The simulators currently use `np.complex64`, but there are other places where the default is still `np.complex128`. The places where it needs to be changed are `circuit.unitary` and `circuit.final_state_vector`.
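For context, here is a minimal NumPy-only sketch (not cirq's actual implementation) of what the two dtypes mean for a unitary like the ones these methods return. The Hadamard matrix used here is purely illustrative:

```python
import numpy as np

# The same Hadamard unitary stored at both precisions.
H128 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # complex128 by default
H64 = H128.astype(np.complex64)                           # half-precision components

# Applying the gate in complex64 keeps the result in complex64.
state64 = H64 @ np.array([1, 0], dtype=np.complex64)
assert state64.dtype == np.complex64

# The two precisions agree only up to float32 resolution (~1e-7).
assert np.allclose(H64, H128, atol=1e-6)
```

The dtype of the input array propagates through the matrix product, which is why a single default deep in `circuit.unitary` or `circuit.final_state_vector` affects everything downstream.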
As discussed in Cirq-cync, can someone please assign me here as I don't have permissions?
@diabhiue I've assigned you.
@diabhiue Do your new changes actually improve the performance?
@vtomole, I tried running ./check/asv_run for the performance tests (post-changes). Initially, it asked for system info (for initial setup), which I accepted by default, but it failed while running. Below is the error message. How can I resolve this?
```zsh
(cirq-py3) ~/projects/Cirq(1030 ✗) ./check/asv_run
· No executable found for python 3.6
· No executable found for python 3.7
· Creating environments
· Discovering benchmarks
·· Uninstalling from virtualenv-py3.8
·· Building 31daa941
```
Adding @tanujkhattar as per git blame
I also noticed that there were some test failures before making any changes. Below are those failed tests. Are these expected?
```
=========================== short test summary info ============================
FAILED cirq-rigetti/cirq_rigetti/sampler_bell_circuit_test.py::test_bell_circuit_through_sampler
FAILED cirq-rigetti/cirq_rigetti/sampler_parametric_circuit_test.py::test_parametric_circuit_through_sampler
FAILED cirq-rigetti/cirq_rigetti/sampler_parametric_circuit_test.py::test_parametric_circuit_through_sampler_with_parametric_compilation
FAILED cirq-rigetti/cirq_rigetti/sampler_readout_reassigned_qubits_test.py::test_readout_on_reassigned_qubits
FAILED cirq-rigetti/cirq_rigetti/sampler_readout_separate_memory_regions_test.py::test_circuit_with_separate_readout_keys_through_sampler
FAILED cirq-rigetti/cirq_rigetti/sampler_test.py::test_get_rigetti_qcs_sampler
FAILED cirq-rigetti/cirq_rigetti/service_bell_circuit_test.py::test_bell_circuit_through_service
FAILED cirq-rigetti/cirq_rigetti/service_parametric_circuit_test.py::test_parametric_circuit_through_service
FAILED cirq-rigetti/cirq_rigetti/service_test.py::test_get_rigetti_qcs_service
FAILED dev_tools/bash_scripts_test.py::test_pytest_changed_files_file_selection
FAILED dev_tools/bash_scripts_test.py::test_pytest_changed_files_branch_selection
FAILED dev_tools/bash_scripts_test.py::test_pytest_and_incremental_coverage_branch_selection
FAILED dev_tools/bash_scripts_test.py::test_incremental_format_branch_selection
FAILED dev_tools/bash_scripts_test.py::test_pylint_changed_files_file_selection
FAILED dev_tools/docker_test.py::test_docker_stable - AssertionError: assert ...
FAILED dev_tools/docker_test.py::test_docker_pre - AssertionError: assert 1 == 0
16 failed, 14660 passed, 51 skipped, 77 xfailed, 571 warnings in 582.07s (0:09:42)
```
Hey @diabhiue, I'm just seeing your ping. I'm also going to be out of office for the next 2 weeks.
Anyway, it seems like the second error is due to https://github.com/quantumlib/Cirq/issues/4660, and the first one is because cirq's sub-packages are not installed before the benchmarks are executed. Since the benchmarks only need cirq-google, you will need to tell asv where it is. I was able to run the benchmarks after making this change to my local asv.conf.json: https://github.com/quantumlib/Cirq/commit/024f6fb80bbcb5f246ff0066b994732a43c4fa09
```zsh
(venv3.7) vtomole@vtomole:~/Cirq$ ./check/asv_run
· No executable found for python 3.6
· Creating environments
· Discovering benchmarks
·· Uninstalling from virtualenv-py3.7
·· Building a75b215f <master> for virtualenv-py3.7.
·· Installing a75b215f <master> into virtualenv-py3.7.
· Running 22 total benchmarks (1 commits * 2 environments * 11 benchmarks)
[ 0.00%] · For Cirq commit a75b215f <master>:
[ 0.00%] ·· Building for virtualenv-py3.7
[ 0.00%] ·· Benchmarking virtualenv-py3.7
[ 2.27%] ··· Running (bench_examples.ExamplesTest.time_example_runs_bcs_mean_field_perf--)...........
[ 27.27%] ··· bench_examples.ExamplesTest.time_example_runs_bcs_mean_field_perf 183±1ms
[ 29.55%] ··· bench_examples.ExamplesTest.time_example_runs_bell_inequality_perf 4.46±0.02ms
[ 31.82%] ··· bench_examples.ExamplesTest.time_example_runs_bernstein_vazirani_perf 4.33±0.07ms
[ 34.09%] ··· bench_examples.ExamplesTest.time_example_runs_grover_perf 6.45±0.1ms
[ 36.36%] ··· bench_examples.ExamplesTest.time_example_runs_hello_line_perf 199±7ms
[ 38.64%] ··· bench_examples.ExamplesTest.time_example_runs_hello_qubit_perf 1.27±0.01ms
[ 40.91%] ··· bench_examples.ExamplesTest.time_example_runs_phase_estimator_perf 44.2±2ms
[ 43.18%] ··· bench_examples.ExamplesTest.time_example_runs_quantum_fourier_transform_perf 6.44±0.3ms
[ 45.45%] ··· bench_examples.ExamplesTest.time_example_runs_quantum_teleportation 5.89±0.3ms
[ 47.73%] ··· bench_examples.ExamplesTest.time_example_runs_superdense_coding 36.6±2ms
[ 50.00%] ··· bench_linalg_decompositions.time_kak_decomposition
```
Thanks!
Doing this for the methods on circuit seems to break a ton of tests (600+), likely because a lot of our default precisions are 1e-7 or 1e-8.
The method `final_state_vector` will need to be updated to default to `np.complex64`.
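As a sanity check on why those tolerances matter: float32 (the component type of `np.complex64`) has a machine epsilon near 1.2e-7, so a test asserting agreement at 1e-8 cannot reliably pass under complex64 arithmetic. A small NumPy illustration:

```python
import numpy as np

# Machine epsilon: the smallest relative spacing between representable values.
eps32 = np.finfo(np.float32).eps   # ~1.19e-07, the resolution of complex64 parts
eps64 = np.finfo(np.float64).eps   # ~2.22e-16, the resolution of complex128 parts

# A 1e-8 tolerance sits below float32 resolution but far above float64's,
# which is consistent with the 600+ failures described above.
assert eps32 > 1e-8
assert eps64 < 1e-8
```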
@dabacon Given https://github.com/quantumlib/Cirq/pull/5636, did we decide that this is not worth doing due to the frequency of the flakes? If so, then I think the only thing left is to monitor the CI for the next few days to confirm that the roll back fixed everything and then close this.
Yes we decided to keep the different precision on the Circuit class. Closing.
For NISQ circuits, the reduced precision isn't particularly relevant but the speed gains are.
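To make that trade-off concrete: halving the bytes per amplitude halves the memory footprint (and memory traffic) of a state-vector simulation, which is a large part of where the speedup comes from. A NumPy-only sketch (the 20-qubit size is an arbitrary illustration):

```python
import numpy as np

n = 20  # illustrative qubit count; state vector has 2**n amplitudes
state64 = np.zeros(2**n, dtype=np.complex64)    # 8 bytes per amplitude
state128 = np.zeros(2**n, dtype=np.complex128)  # 16 bytes per amplitude

# complex64 uses exactly half the memory of complex128.
assert state64.nbytes * 2 == state128.nbytes
print(state64.nbytes, state128.nbytes)  # 8388608 16777216
```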