Closed by @EthanObadia 1 month ago.
Thanks @EthanObadia. I would suggest that the `promote_operator` function should also use a similar signature of `(op: Tensor, op_qs: tuple, target_qs: tuple)`, so that it compares the qubit support of the current operator and fills in identities to have it match the target qubit support.
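The suggested signature can be illustrated with a minimal sketch. This is a hypothetical implementation, not pyqtorch's actual `promote_operator`; it assumes `op` is a plain `(2^k, 2^k)` matrix and that `op_qs` is a contiguous, ordered subset of `target_qs`:

```python
import torch

def promote_operator(op: torch.Tensor, op_qs: tuple, target_qs: tuple) -> torch.Tensor:
    """Pad `op` (acting on op_qs) with identities so it acts on target_qs.

    Hypothetical sketch: op is a plain (2^k, 2^k) matrix and op_qs is a
    contiguous, ordered subset of target_qs.
    """
    assert set(op_qs) <= set(target_qs)
    result = torch.tensor([[1.0]], dtype=op.dtype)
    for q in target_qs:
        if q == op_qs[0]:
            result = torch.kron(result, op)  # insert the whole operator block once
        elif q not in op_qs:
            result = torch.kron(result, torch.eye(2, dtype=op.dtype))
    return result
```

For example, promoting a one-qubit gate on qubit 1 to the support `(0, 1)` yields the same matrix as explicitly kron-ing an identity in front.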
@vytautas-a this issue was opened by @EthanObadia and it is related to what I had in mind, but I will need to add a few more details.
@jpmoutinho , I’m noting a few remarks here that might already be on your radar, related to operator multiplication in pyq.
First, the `apply_density_mat` function. Currently, it relies on `torch.einsum`. In the future, it might be good for it to depend directly on the `operator_product` function so that it can directly expand the operators, resulting in something like `operator_product(op, operator_product(rho, op_dagger))`. The idea here was mainly to streamline the syntax in the `forward` functions.
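For plain matrix representations, the proposed composition amounts to `op @ rho @ op†`. A minimal sketch, with `operator_product` reduced to a hypothetical matrix multiplication on matching supports:

```python
import torch

def operator_product(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # hypothetical reduced form: both operators on the same qubit support,
    # so the product is a plain matrix multiplication
    return a @ b

def apply_density_mat(op: torch.Tensor, rho: torch.Tensor) -> torch.Tensor:
    # evolve a density matrix: rho -> op rho op^dagger
    op_dagger = op.conj().transpose(-2, -1)
    return operator_product(op, operator_product(rho, op_dagger))
```

For instance, applying the Pauli-X gate to the density matrix of |0⟩ produces the density matrix of |1⟩.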
Second, the `expand_operator` function currently uses `torch.kron`. It might be better to use the `operator_kron` function, since it was designed for the special shape of our tensors.
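The shape issue is that `torch.kron` flattens all axes, including any trailing batch axis. A hedged sketch of a batch-aware Kronecker product, in the spirit of `operator_kron` but not its actual implementation, assuming operators are stored as `(dim_row, dim_col, batch)` tensors:

```python
import torch

def batched_kron(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Kronecker product over the first two axes, preserving a trailing batch axis.

    Assumes operators of shape (m, n, batch) and (p, q, batch);
    returns a tensor of shape (m*p, n*q, batch).
    """
    m, n, batch = a.shape
    p, q, _ = b.shape
    # pairwise products laid out so that reshaping reproduces kron's block structure
    out = torch.einsum("mnb,pqb->mpnqb", a, b)
    return out.reshape(m * p, n * q, batch)
```

For a batch of size one this agrees slice-by-slice with `torch.kron` on the unbatched matrices.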
Closing as completed https://github.com/pasqal-io/pyqtorch/pull/268
Issue hijacked by @jpmoutinho:
The goal of this issue should be to design a function that multiplies two operators acting on different qubit supports without explicitly padding the smaller operator with identities. This should be possible, and it should follow a similar logic to what the `apply_operator` function does: there, it multiplies a "small" operator on a "large" state by some smart reshaping and einsumming. The new function should do the same for multiplying a "small" operator on a "large" operator. This assumes that the `qubit_support` of the small operator is a subset of the `qubit_support` of the large operator. There is also the case where they only partially overlap. Then there is the "easy" case where they are disjoint, and the solution is simply to kron them.

Previous text: We need to modify the
`operator_product` function to generalize it and make it more versatile for our needs, particularly for its use in the `tensor` method of the `Sequence` class. Currently, `operator_product` is highly focused on the multiplication between a density matrix and an operator, which limits the generality of this function. `operator_product` will be modified to remove the `target` attribute and instead accept the qubit supports of both operators as inputs. This means that the function logic will be changed to operate based on the provided qubit supports, ensuring that it can handle a more general set of operator multiplications.

Modification of `promote_operator`: In conjunction with the changes to `operator_product`, we also need to modify the `promote_operator` function. The aim is to eliminate issue #183, which concerns dimension mismatches that currently hinder the creation of circuits with multi-qubit gates, especially in the presence of noise. The updated `promote_operator` will ensure that our circuits can handle multi-qubit gates effectively, even under noisy conditions.
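The reshape-and-contract idea from the hijacked goal above can be sketched as follows. This is a hypothetical helper, not pyqtorch's implementation: no batch dimension, the small operator's support is assumed to be a sorted subset of the large one's, and qubit 0 is the most significant axis:

```python
import torch

def mul_small_on_large(
    small: torch.Tensor, small_qs: tuple, large: torch.Tensor, large_qs: tuple
) -> torch.Tensor:
    """Compute promote(small) @ large without materializing identity paddings.

    Hypothetical sketch: plain matrices (no batch axis), small_qs a sorted
    subset of large_qs.
    """
    n, k = len(large_qs), len(small_qs)
    # split each row/column index of `large` into one axis of dim 2 per qubit
    large_t = large.reshape((2,) * (2 * n))
    small_t = small.reshape((2,) * (2 * k))
    # row axes of `large` that the small operator acts on
    pos = tuple(large_qs.index(q) for q in small_qs)
    # contract small's column axes against those row axes
    out = torch.tensordot(small_t, large_t, dims=(tuple(range(k, 2 * k)), pos))
    # tensordot leaves the new row axes in front; move them back into place
    out = torch.moveaxis(out, tuple(range(k)), pos)
    return out.reshape(2**n, 2**n)
```

A quick consistency check: multiplying a one-qubit gate on qubit 1 of a two-qubit operator should match explicitly padding with an identity and using an ordinary matrix product.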