Qiskit / qiskit

Qiskit is an open-source SDK for working with quantum computers at the level of extended quantum circuits, operators, and primitives.
https://www.ibm.com/quantum/qiskit
Apache License 2.0

`ConsolidateBlocks` does not have good logic for heterogeneous gates #11659

Open ajavadia opened 8 months ago

ajavadia commented 8 months ago

What should we add?

ConsolidateBlocks has some logic for choosing whether to collapse some blocks into a UnitaryGate. But this is pretty outdated by now. It basically checks whether the number of gates in the decomposition improves. First, the number of gates is not what really matters; the error is. Second, it does not currently handle multiple (heterogeneous) possible decompositions.

But all of this is implemented correctly in UnitarySynthesis (at least for 2q blocks). So ConsolidateBlocks should simply defer to UnitarySynthesis for deciding when and how to resynthesize a sequence of 2q gates. All of its decomposition considerations should come from UnitarySynthesis.

I think it is better to write a new pass, PeepholeUnitaryResynthesis, which performs all three of these actions: Collect2QBlocks, ConsolidateBlocks, and UnitarySynthesis. The logic must be consistent, so there is no point splitting these three stages. I believe this can replace the UnitarySynthesis pass, because any unitary can be considered a simple peephole unitary.

(Note: currently, if the user suspects that UnitarySynthesis will improve the circuit, they can force it to run by adding [Collect2QBlocks(target=target), ConsolidateBlocks(force_consolidate=True), UnitarySynthesis(target=target)] to the pass manager. So a user who knows how to use the pass manager can already customize this.)

mtreinish commented 6 months ago

I like this idea in general. I'm thinking of how it relates to #8774 (specifically the #12007 sub-task) and we can sidestep the need to add a batch mode to the unitary synthesis plugin interface by doing this all at once in multithreaded rust in a new pass.

The only question I have though is in evaluating the error for the original 2q block. I agree that we should use an estimated error heuristic to evaluate potential decompositions and select one based on that instead of the number of gates (which is just being used as a proxy for estimated error rate). But prior to synthesis there isn't a guarantee that the gates in a 2q block are target instructions that we can query error rates on. How were you thinking we'd evaluate the block in these cases? Because I was reading this as: we compare the error estimates for the original circuit against all the possible decompositions and pick the one which results in the lowest error. I guess the answer is that if the block isn't in target native instructions we always need to synthesize, so in those cases we pick the lowest-error decomposition?
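The comparison being discussed could be sketched as follows. This is a hypothetical helper, not anything in Qiskit: it estimates a block's fidelity as the product of per-gate fidelities from a plain `(name, qubits) -> error` mapping, and returns `None` when the block contains a gate with no queryable error rate, i.e. the non-native case raised above where resynthesis is mandatory and we just pick the lowest-error decomposition.

```python
def estimated_fidelity(gates, error_rates):
    """Estimate block fidelity as the product of per-gate (1 - error).

    gates: sequence of (name, qubits) tuples describing the block.
    error_rates: dict mapping (name, qubits) -> error rate, e.g. from a Target.
    Returns None if any gate is not a native target instruction, signalling
    that the original block cannot be scored and must be resynthesized.
    """
    fidelity = 1.0
    for name, qubits in gates:
        error = error_rates.get((name, qubits))
        if error is None:
            return None  # non-native gate: no error rate to query
        fidelity *= 1.0 - error
    return fidelity


# Native block: can be compared against candidate decompositions.
native = estimated_fidelity(
    [("cx", (0, 1)), ("cx", (0, 1))],
    {("cx", (0, 1)): 0.01},
)

# Block containing a non-native gate: must always be synthesized.
non_native = estimated_fidelity(
    [("rzz", (0, 1))],
    {("cx", (0, 1)): 0.01},
)
```

Under this sketch, "keep the original block" is only an option when `estimated_fidelity` returns a number; otherwise the pass compares candidate decompositions among themselves.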