unitaryfund / mitiq

Mitiq is an open-source toolkit for implementing error mitigation techniques on noisy intermediate-scale quantum (NISQ) computers.
https://mitiq.readthedocs.io
GNU General Public License v3.0

Improve calibration method/parameter selection #1627

Closed natestemen closed 1 year ago

natestemen commented 1 year ago

We need a smarter method to go from the calibration experiments to recommending an error mitigation method along with parameters for that method.

More details will come later once https://github.com/unitaryfund/mitiq/pull/1614 is merged.

Misty-W commented 1 year ago

Hi @natestemen and @andreamari, is there anything specific you have in mind for this issue, maybe left over from #1614, or should we start brainstorming first?

natestemen commented 1 year ago

Sorry for the delay on writing more details here, I wanted to get https://github.com/unitaryfund/mitiq/pull/1676 opened first to ensure nothing drastic was going to change that would affect this.

As things stand now, method/parameter selection takes place inside the `best_strategy` method of the `Calibration` class. https://github.com/unitaryfund/mitiq/blob/d12c96644a79f6a19c61c1da6b6d8cd9601c1e37/mitiq/calibration/calibration.py#L153-L165

The goal of this ticket is to come up with new methods of selecting a given strategy from a collection of results. The structure of the results object is described in the tests via the following schema: https://github.com/unitaryfund/mitiq/blob/d12c96644a79f6a19c61c1da6b6d8cd9601c1e37/mitiq/calibration/tests/test_calibration.py#L90-L119

This is subject to change ever so slightly in https://github.com/unitaryfund/mitiq/pull/1676, but I do not believe it will change in ways that impact this work (e.g. no structural changes affecting the `improvement_factor` quantities). As you can see in the schema, there are two improvement factors available:

  1. for an entire method (ZNE, PEC, ...)
  2. for each combination of parameters

Right now we only support calibration with ZNE, so to make this easier to start I think we should focus on parameter selection within each method. This probably means using only the improvement factors for each individual strategy and ignoring the top-level one.
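As a starting point, the parameter selection described above might look like the following sketch. The data layout here is hypothetical (the real schema is the one linked in the tests above); the point is just to show picking the per-strategy `improvement_factor` maximum within a single method:

```python
# Hypothetical results layout: a list of ZNE parameter combinations, each
# annotated with its measured improvement factor. This is NOT the actual
# Mitiq calibration schema, just an illustrative stand-in.
results = {
    "ZNE": {
        "strategies": [
            {"scale_factors": [1, 2, 3], "extrapolation": "linear", "improvement_factor": 1.4},
            {"scale_factors": [1, 3, 5], "extrapolation": "richardson", "improvement_factor": 2.1},
            {"scale_factors": [1, 2, 3], "extrapolation": "exp", "improvement_factor": 1.8},
        ]
    }
}


def best_strategy(results: dict, method: str = "ZNE") -> dict:
    """Return the parameter combination with the largest improvement factor,
    ignoring the method-level (top-level) improvement factor."""
    return max(
        results[method]["strategies"],
        key=lambda strategy: strategy["improvement_factor"],
    )


best = best_strategy(results)  # the Richardson combination in this toy data
```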


@Misty-W I will be OOO next Mon-Wed. Will you have any time between now and Thursday to look at this/brainstorm/code? No worries if not; I'm just gauging what I need to do for the end of this milestone. Also, if there are any further questions or things that don't make sense in the above, let's get them clarified!

Misty-W commented 1 year ago

> @Misty-W I will be OOO next Mon-Wed. Will you have any time from now until Thursday to look at this/brainstorm/code?

Thanks @natestemen, I do plan to work on this issue starting later today or tomorrow. Maybe we could discuss after the Mitiq meeting tomorrow?

natestemen commented 1 year ago

Yes, happy to do that!

BTW would you mind removing the quote reply in your comment? I don't think it's needed to duplicate the info, and it makes issues harder to skim (especially with a longer comment like that). If you do want to quote reply, you can always copy the specific sentence and use that to provide context.

natestemen commented 1 year ago

After discussing with Misty, we feel a good first step here would be to calculate an average improvement factor for each strategy over all of the circuit types (GHZ, RB, mirror) and use the highest average improvement factor to decide the strategy choice.

Down the line we could do more thorough statistical analyses on each of the parameters, but we think this is a good start.
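The averaging step above could be sketched roughly as follows. The strategy names and per-circuit numbers are made up for illustration; in practice they would come from the calibration results object:

```python
from statistics import mean

# Hypothetical improvement factors per strategy and per circuit type.
# The real values would be read from the Calibration results.
improvement_factors = {
    "zne-linear": {"ghz": 1.2, "rb": 1.5, "mirror": 1.3},
    "zne-richardson": {"ghz": 2.0, "rb": 1.8, "mirror": 2.2},
    "zne-exp": {"ghz": 1.1, "rb": 1.6, "mirror": 1.4},
}


def select_strategy(factors: dict) -> str:
    """Pick the strategy whose improvement factor, averaged over all
    circuit types (GHZ, RB, mirror), is highest."""
    averages = {
        name: mean(per_circuit.values())
        for name, per_circuit in factors.items()
    }
    return max(averages, key=averages.get)


chosen = select_strategy(improvement_factors)
```

Averaging first (rather than, say, taking a per-circuit maximum) favors strategies that perform consistently across circuit types, which seems like the right default for a calibration recommendation.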