Haggai suggests that qiskit-experiments provide a class, let's name it MultiJobExperiment. MultiJobExperiment will inherit from BatchExperiment and behave like it, with the difference that different sub-experiments will be routed to different jobs.
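For contrast, a minimal sketch of how the two would differ from the user's point of view (MultiJobExperiment is hypothetical and does not exist yet; the assumption here is that it would keep BatchExperiment's constructor):

# Today: BatchExperiment merges the circuits of all sub-experiments and submits
# them together, splitting into jobs only by the backend's per-job circuit limit.
from qiskit_experiments.framework import BatchExperiment
from qiskit_experiments.library.randomized_benchmarking import StandardRB

lengths = list(range(20, 201, 20))
sub_exps = [StandardRB(qubits=[q], lengths=lengths) for q in (0, 1)]
batch_exp = BatchExperiment(sub_exps)

# Proposed (hypothetical): same constructor, but the circuits of each
# sub-experiment would be submitted as a separate job.
# multi_exp = MultiJobExperiment(sub_exps)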
Specifically for our use case of RB on all device qubits, both solutions will work: a num_job parameter, as suggested above, or a MultiJobExperiment. These two suggestions are not equivalent, however; for each there may be use cases that it covers and the other does not. Haggai says that he has an additional concrete use case where MultiJobExperiment will help him; I'll ask him to detail it here.
I tried to route experiments to different jobs without writing a new experiment class, in this spirit:
from qiskit import IBMQ
from qiskit_experiments.framework import ParallelExperiment, ExperimentData, CompositeAnalysis
from qiskit_experiments.library.randomized_benchmarking import StandardRB

IBMQ.load_account()
provider = IBMQ.get_provider(hub="ibm-q-internal", group="dev-qiskit", project="ignis")
backend = provider.backend.ibm_bangkok

basis_gates = ["rz", "sx", "cx"]
transpiler_options = {
    "basis_gates": basis_gates,
    "optimization_level": 1,
}

# qubit_groups = [[0, 1, 2, 3, 4, 5, 6, 7], [8, 9, 10, 11, 12, 13, 14], [15, 16, 17, 18, 19, 20, 21]]
qubit_groups = [[0], [1]]
lengths = list(range(20, 201, 20))

# Run one ParallelExperiment per qubit group, so that each group is submitted as its own job(s).
pardatalist = []
set_of_qubits = []
for group in qubit_groups:
    set_of_qubits.extend(group)
    exps = []
    for qubit in group:
        exp = StandardRB(
            qubits=[qubit],
            lengths=lengths,
            seed=123,
            backend=backend,
            num_samples=3,
        )
        exp.analysis.set_options(gate_error_ratio=None, plot_raw_data=False)
        exps.append(exp)
    parexp = ParallelExperiment(exps, flatten_results=True)
    parexp.set_transpile_options(**transpiler_options)
    pardata = parexp.run(backend=backend)
    pardatalist.append(pardata)

# Collect the jobs, analysis results, and figures of all groups into a single ExperimentData.
expdata = ExperimentData(backend=backend)
expdata.experiment_type = "Hi Haggai"
expdata.share_level = "project"
expdata.metadata["physical_qubits"] = set_of_qubits
for pardata in pardatalist:
    pardata.block_for_results()
    expdata.add_jobs(pardata.jobs())
    expdata.add_analysis_results(pardata.analysis_results())
    figs = []
    for fig_id in range(len(pardata.figure_names)):
        figs.append(pardata.figure(fig_id))
    expdata.add_figures(figs, figure_names=pardata.figure_names)

expdata.save()
This code snippet does not work, because we are trying to save analysis results whose experiments are not recognized by the database:
Unable to save the experiment data: Traceback (most recent call last):
File "/home/yaelbh/.local/lib/python3.8/site-packages/qiskit_experiments/framework/analysis_result.py", line 225, in save
self.service.create_or_update_analysis_result(
File "/home/yaelbh/.local/lib/python3.8/site-packages/qiskit_ibm_experiment/service/ibm_experiment_service.py", line 787, in create_or_update_analysis_result
return self.create_or_update(
File "/home/yaelbh/.local/lib/python3.8/site-packages/qiskit_ibm_experiment/service/ibm_experiment_service.py", line 1348, in create_or_update
result = create_func(**params)
File "/home/yaelbh/.local/lib/python3.8/site-packages/qiskit_ibm_experiment/service/ibm_experiment_service.py", line 746, in create_analysis_result
response = self._api_client.analysis_result_create(
File "/usr/lib/python3.8/contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "/home/yaelbh/.local/lib/python3.8/site-packages/qiskit_ibm_experiment/service/utils.py", line 44, in map_api_error
raise IBMApiError(
qiskit_ibm_experiment.exceptions.IBMApiError: 'Failed to process the request: The server responded with \'400 Client Error: Bad Request for url: https://resultsdb.quantum-computing.ibm.com/analysis_results. {"errors":["Experiment 2e650001-6e12-4b4c-8613-c486eeef3e76 does not exist"]}\''
The way to overcome it is to use CompositeAnalysis, which in turn expects the metadata to contain information coming from the sub-experiments. Providing all of this metadata to the composite analysis is de facto a re-implementation of CompositeExperiment, hence we fall back to writing a new experiment class like MultiJobExperiment.
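To make this concrete, here is a rough sketch of what the manual wiring would look like, reusing the variables from the snippet above. The metadata keys ("component_types", "component_metadata") and the circuit-level keys mentioned in the comments are my recollection of what CompositeExperiment normally writes, so treat them as assumptions; assembling all of this by hand is exactly the re-implementation referred to here.

# Sketch only, reusing qubit_groups, lengths, and expdata from the snippet above.
# CompositeAnalysis rebuilds child ExperimentData containers from the parent's
# metadata, so the parent must carry the same bookkeeping that CompositeExperiment
# would have written for it.
from qiskit_experiments.framework import CompositeAnalysis
from qiskit_experiments.library.randomized_benchmarking import StandardRB

all_exps = [
    StandardRB(qubits=[qubit], lengths=lengths, seed=123, num_samples=3)
    for group in qubit_groups
    for qubit in group
]

# Assumed metadata keys, mirroring what CompositeExperiment populates.
expdata.metadata["component_types"] = [exp.experiment_type for exp in all_exps]
expdata.metadata["component_metadata"] = [exp._metadata() for exp in all_exps]

# In addition, every circuit's metadata would need the composite routing fields
# (e.g. "composite_index", "composite_metadata") for the analysis to sort results.
composite_analysis = CompositeAnalysis([exp.analysis for exp in all_exps])
composite_analysis.run(expdata)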
Thanks @yaelbh - the use case I have besides RB is a set of characterization experiments that contain many circuits (such as whole-device Ramsey sequences on disconnected qubits + ZZ on non-neighboring edges), amounting to more than the limit of 300 circuits per job. It then makes sense to divide the circuits into sub-jobs manually, in order to have control over their simultaneous execution.
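As a rough illustration of that manual splitting (assuming a flat list of circuits and a 300-circuit-per-job limit; circuits and backend are placeholders for whatever the experiments produce):

# Minimal sketch: chunk a long circuit list into jobs of at most 300 circuits
# and submit each chunk separately, keeping track of the resulting jobs.
max_circuits_per_job = 300  # the per-job limit mentioned above
jobs = []
for start in range(0, len(circuits), max_circuits_per_job):
    chunk = circuits[start:start + max_circuits_per_job]
    jobs.append(backend.run(chunk))

results = [job.result() for job in jobs]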
In a meeting we agreed that I'll open two separate PRs:
1. max_circuits (the name is consistent with a similar VQE option) for BaseExperiment - the maximum number of circuits in a job.
2. different_jobs (can you think of a better name?) for BatchExperiment - whether to run the circuits of different sub-experiments in different jobs.

Thank you. Perhaps separate_jobs is a slightly better name.
The run options are pretty much dedicated to options that are passed to backend.run. Therefore max_circuits and separate_jobs will be experiment options.
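If both land as experiment options, user code might look roughly like this (hypothetical usage: neither option exists yet, and the final names may differ; sub_exps and backend are assumed from the earlier examples):

# Hypothetical usage of the proposed experiment options.
from qiskit_experiments.framework import BatchExperiment

batch_exp = BatchExperiment(sub_exps)
# Proposed for BaseExperiment: cap the number of circuits per submitted job.
batch_exp.set_experiment_options(max_circuits=300)
# Proposed for BatchExperiment: route each sub-experiment's circuits to its own job.
batch_exp.set_experiment_options(separate_jobs=True)
expdata = batch_exp.run(backend=backend)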
This issue is related to #897.
Suppose that the user knows that she wants to split her circuits into 10 jobs. Currently she can inherit from the experiment and override _run_jobs. Do you think we can add this option, in a nice way, to the experiment interface? Maybe as an experiment option named num_jobs? Then users will not have to write their own code for _run_jobs.
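For reference, a sketch of that user-side workaround; the _run_jobs signature is an assumption based on the current BaseExperiment implementation and may differ between versions:

# Sketch: split an experiment's circuits into a fixed number of jobs by
# overriding the private _run_jobs hook (signature assumed, may change).
from qiskit_experiments.library.randomized_benchmarking import StandardRB


class SplitJobsRB(StandardRB):
    """StandardRB that always splits its circuits into a fixed number of jobs."""

    def __init__(self, *args, num_jobs=10, **kwargs):
        super().__init__(*args, **kwargs)
        self._num_jobs = num_jobs

    def _run_jobs(self, circuits, **run_options):
        # Evenly chunk the circuits and submit one job per chunk.
        chunk_size = max(1, -(-len(circuits) // self._num_jobs))  # ceiling division
        chunks = [circuits[i:i + chunk_size] for i in range(0, len(circuits), chunk_size)]
        return [self.backend.run(chunk, **run_options) for chunk in chunks]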