The original M3 workflow works pretty well and provides a nice experience for users. However, every mitigation workflow has essentially two steps: calibrate, then mitigate. M3 should follow this pattern as well, in a manner like the following pseudo-code:
```python
# Step 1: calibrate (defaults to the full device)
calibration = Calibration(backend)
calibration.run()
# Step 2: mitigate raw counts using that calibration
mit = M3Mitigator(calibration)
quasi = mit.run(counts, mappings)
```
This works nicely because, using the standard 'balanced' calibrations, there is no QPU runtime difference between calibrating a full device and a subset of qubits; one can calibrate the full device by default and incur no QPU runtime overhead. It also allows methods related to the calibrations to be attached to the `Calibration` object. This is advantageous because it is often of interest what error rates the calibration data yields, how those vary with time, and so on. Finally, it allows for a single object that can be used in both M3 and the solver in #215.
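As an illustration only (nothing here is fixed by this proposal beyond the pseudo-code above, and the attribute and method names are assumptions), a `Calibration` object might be shaped roughly like this skeleton:

```python
import datetime


class Calibration:
    """Sketch of one possible shape for the proposed Calibration object.

    Holds readout-calibration data for a set of qubits on a backend and
    exposes convenience methods for inspecting that data.
    """

    def __init__(self, backend, qubits=None, method="balanced"):
        self.backend = backend
        self.method = method
        # Default to the full device; with 'balanced' calibrations this
        # incurs no extra QPU runtime over calibrating a subset.
        # (Assumes a BackendV2-style `num_qubits` attribute.)
        self.qubits = list(qubits) if qubits is not None else list(range(backend.num_qubits))
        self.cal_data = None   # raw calibration counts, filled by run()
        self.timestamp = None  # when the calibration was taken

    def run(self, shots=10_000):
        """Execute the calibration circuits and store the results."""
        # Submitting the calibration circuits is backend-specific and
        # omitted here; only the bookkeeping is sketched.
        self.cal_data = {}
        self.timestamp = datetime.datetime.now(datetime.timezone.utc)

    def error_rates(self):
        """Per-qubit readout error rates derived from the calibration data."""
        if self.cal_data is None:
            raise RuntimeError("Calibration has not been run yet.")
        # Placeholder: a real implementation would compute assignment
        # errors per qubit from self.cal_data.
        return {q: None for q in self.qubits}
```

Having the data and these inspection methods on one object is also what makes it straightforward to hand the same calibration to the solver in #215.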
The original workflow will be left as is, and will just call these components internally.
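For concreteness, the backwards-compatible wrapper could look roughly like the following. The `cals_from_system` / `apply_correction` names are assumed to match the current M3 interface, and `Calibration` / `M3Mitigator` refer to the proposed components sketched above; this is an illustration, not the implementation.

```python
class M3Mitigation:
    """Existing user-facing class, kept as-is, delegating internally."""

    def __init__(self, backend):
        self.backend = backend
        self._calibration = None

    def cals_from_system(self, qubits=None, shots=10_000):
        # Delegate calibration to the new Calibration object.
        self._calibration = Calibration(self.backend, qubits=qubits)
        self._calibration.run(shots=shots)

    def apply_correction(self, counts, qubits):
        # Delegate mitigation to the new mitigator component.
        mit = M3Mitigator(self._calibration)
        return mit.run(counts, qubits)
```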