jpmoutinho opened 1 year ago
Code snippet to save for later:
```python
from qadence import *
from qadence.blocks import MatrixBlock
import torch

op_support = (0, 2)
lookup_support = (2, 0)

# An example operation
op_block = CNOT(op_support[0], op_support[1])

# The matrix block that should be completely equivalent to the operation
matrix = block_to_tensor(op_block)
matrix_block = MatrixBlock(matrix, qubit_support=op_support)

# Looking at both through block_to_tensor
print(block_to_tensor(matrix_block, qubit_support=lookup_support))
print(block_to_tensor(op_block, qubit_support=lookup_support))
```
IMO, with a setup like this, running `block_to_tensor` with a different `lookup_support` should produce the same final matrix for both blocks, but it currently doesn't.
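To spell out the equivalence I would expect, here is a plain-torch sketch (no qadence involved; the tensor names are mine): reversing the qubit support of a two-qubit gate amounts to conjugation by SWAP, which flips which qubit is the control of the CNOT.

```python
import torch

# CNOT with control on the first qubit, target on the second
CNOT = torch.tensor(
    [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]], dtype=torch.cdouble)

SWAP = torch.tensor(
    [[1, 0, 0, 0],
     [0, 0, 1, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 1]], dtype=torch.cdouble)

# Reversing the two-qubit support = conjugating by SWAP:
# the result is the CNOT with control and target exchanged.
CNOT_reversed = SWAP @ CNOT @ SWAP
```

Whatever convention `block_to_tensor` picks for the lookup support, the point is that both the `MatrixBlock` and the original `CNOT` should land on the same matrix.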
I found this bug using the code below, on `main`:
```python
import torch
import qadence
from qadence.blocks.matrix import MatrixBlock

XMAT = torch.tensor([[0, 1], [1, 0]], dtype=torch.cdouble)
matblock = MatrixBlock(XMAT, (0,))
print(qadence.run(matblock))
```
Returns:
```
Traceback (most recent call last):
  File "qadence/test.py", line 82, in <module>
    print(qadence.run(matblock))
          ^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/functools.py", line 909, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "qadence/execution.py", line 101, in _
    return run(Register(n_qubits), block, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "qadence/register.py", line 73, in __init__
    self.graph = support if isinstance(support, nx.Graph) else alltoall_graph(support)
                                                               ^^^^^^^^^^^^^^^^^^^^^^^
  File "qadence/register.py", line 379, in alltoall_graph
    graph = nx.complete_graph(n_qubits)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<class 'networkx.utils.decorators.argmap'> compilation 4", line 3, in argmap_complete_graph_1
  File "Qadence_venv/lib/python3.11/site-packages/networkx/utils/backends.py", line 633, in __call__
    return self.orig_func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<class 'networkx.utils.decorators.argmap'> compilation 8", line 3, in argmap_complete_graph_5
  File "Qadence_venv/lib/python3.11/site-packages/networkx/utils/decorators.py", line 255, in _nodes_or_number
    nodes = tuple(n)
            ^^^^^^^^
TypeError: 'numpy.float64' object is not iterable
```
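The last frames suggest `n_qubits` reaches `nx.complete_graph` as a `numpy.float64` rather than an `int`. A minimal sketch of just that failure mode, outside qadence (networkx cannot interpret a non-integer scalar as a node count, so it falls back to iterating over it and fails):

```python
import numpy as np
import networkx as nx

n_qubits = np.float64(1.0)  # what complete_graph apparently receives here

# complete_graph expects an int (node count) or an iterable of nodes;
# a numpy.float64 is neither, hence the "not iterable" TypeError
try:
    nx.complete_graph(n_qubits)
    failed = False
except TypeError:
    failed = True

# casting to int before building the graph avoids the error
graph = nx.complete_graph(int(n_qubits))
```

So a plausible fix on the qadence side would be to ensure the qubit count is an `int` before the register graph is built (I have not checked where the float creeps in).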
@Roland-djee, @jpmoutinho.
Thanks @EthanObadia. Do you need `MatrixBlock`? This is another Pandora's box to be opened...
This is to declare any random matrix as a unitary operator, I presume; I think I would need this. Isn't the output of `HamEvo` for a general Hamiltonian a `MatrixBlock`?
In a recent MR some issues were found in `MatrixBlock` and `block_to_tensor`. A few things to check:

- `MatrixBlock` allows a matrix of any size (e.g., 3x3), but since `qubit_support` must be given, the size should only be a power of 2.
- When calling `block_to_tensor` on a `MatrixBlock`, the identity filling was not being done properly; it is currently disabled, so it just returns the matrix that was used to build the `MatrixBlock`.
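For the first point, a hedged sketch of the validation a `MatrixBlock` constructor could apply (pure torch; the helper name is mine, not qadence API): the matrix must be square, its dimension a power of two, and unitary if it is to stand in for an operator.

```python
import torch

def check_matrix(m: torch.Tensor) -> bool:
    """Return True if m is square, of power-of-two size, and unitary."""
    if m.ndim != 2 or m.shape[0] != m.shape[1]:
        return False
    n = m.shape[0]
    if n < 1 or n & (n - 1) != 0:  # power-of-two check via bit trick
        return False
    eye = torch.eye(n, dtype=m.dtype)
    return torch.allclose(m.conj().T @ m, eye)

XMAT = torch.tensor([[0, 1], [1, 0]], dtype=torch.cdouble)  # 2x2 unitary: ok
BAD = torch.eye(3, dtype=torch.cdouble)  # unitary, but 3x3 is not a power of 2
```

This would reject the 3x3 case up front instead of letting it surface later in `block_to_tensor`.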