Closed: braised-babbage closed this 2 years ago
To be clear: the speed increase comes from QVM inflating operators via how they operate on data, rather than first inflating the operation-as-data and doing a generic matmul?
That's the main source of the speedup, along with whatever micro-optimizations the QVM has in stock.
This extends the pure-state QVM to enable calculation of unitary matrices from gate programs. The mechanism is very simple: for a k-qubit program, we use an amplitude vector on 2k qubits (the 2^(2k) amplitudes are exactly enough to hold a 2^k × 2^k matrix), with the high k qubits indexing the column of the matrix and the low k qubits indexing the row. Measurement is not allowed.
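The mechanism can be sketched in a few lines of NumPy. This is not the QVM's Lisp implementation, just an illustrative model under the same convention: the amplitude array is indexed `amps[col, row]`, starts as the identity, and each gate acts only on the low (row) qubits, so each column evolves independently into a column of the program's unitary. The names `program_unitary` and `apply_gate`, and the "qubit 0 is the most significant bit" ordering, are assumptions of this sketch.

```python
import numpy as np

def apply_gate(amps, gate, targets, k):
    """Apply `gate` (a 2^m x 2^m matrix) to the low-k "row" qubits named
    in `targets`. `amps` has shape (2^k, 2^k), indexed amps[col, row].
    Convention (an assumption of this sketch): qubit 0 is the MSB of the
    row index."""
    dim = 2 ** k
    m = len(targets)
    t = amps.reshape((dim,) + (2,) * k)      # split the row index into k qubit axes
    g = gate.reshape((2,) * (2 * m))         # gate as a tensor: m outputs, m inputs
    in_axes = [1 + q for q in targets]       # axis 0 is the (passive) column index
    # Contract the gate's input axes against the target row-qubit axes.
    t = np.tensordot(g, t, axes=(list(range(m, 2 * m)), in_axes))
    # tensordot leaves the gate's output axes in front; move them back in place.
    t = np.moveaxis(t, range(m), in_axes)
    return t.reshape(dim, dim)

def program_unitary(program, k):
    """Run a gate program (a list of (matrix, target-qubits) pairs) on the
    doubled register and read the resulting unitary back out."""
    amps = np.eye(2 ** k, dtype=complex)     # identity: amps[c, r] = delta_{c,r}
    for gate, targets in program:
        amps = apply_gate(amps, gate, targets, k)
    return amps.T                            # after the run, amps[c, r] == U[r, c]
```

The key point, matching the PR's description, is that no 2^k × 2^k gate matrix is ever inflated and multiplied: each gate is applied directly to the amplitude data, and the full unitary only ever exists as the final state of the doubled register.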
As a comparison with our old friend `PARSED-PROGRAM-TO-LOGICAL-MATRIX`, the performance of this is more or less similar for small programs on 2-4 qubits, and dramatically faster for programs on a larger number of qubits. Basically, what you'd expect.

Although not part of this PR, I do hope to subsequently rely on the new `PARSED-PROGRAM-UNITARY-MATRIX` (though perhaps with a better name) in the quilc tests, in lieu of our current magicl matrix multiply approach.