jcmgray / quimb

A python library for quantum information and many-body calculations including tensor networks.
http://quimb.readthedocs.io

TNOptimizer not optimizing a custom MPO with a target U #224

Closed saurabh-shringarpure closed 2 months ago

saurabh-shringarpure commented 3 months ago

What is your issue?

I am using these functions, modified from the docs.

For normalizing the MPO:

def normalize_op(mpo):
    mpo /= mpo.norm()
    return mpo

For the loss:

def negative_overlap(mpo, U):
    return -abs((mpo.H & U).contract(all, optimize='auto-hq')) / np.sqrt(2**n)
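
As a sanity check of what this loss measures, here is a minimal dense-matrix analogue using only numpy (the matrices are hypothetical stand-ins for the MPO and the target, not quimb objects): the overlap Tr(A†U), divided by √(2ⁿ), reaches its maximum of 1 exactly when A is the Frobenius-normalized target, so the loss bottoms out at -1.

```python
import numpy as np

n = 2
dim = 2**n
rng = np.random.default_rng(0)

# random complex unitary as a stand-in for the target propagator
U = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))[0]

def normalize_dense(A):
    # Frobenius normalization, the dense analogue of mpo /= mpo.norm()
    return A / np.linalg.norm(A)

def negative_overlap_dense(A, U):
    # <A, U> = Tr(A^dagger U); Cauchy-Schwarz with ||A|| = 1 and ||U|| = sqrt(2^n)
    # bounds |<A, U>| / sqrt(2^n) by 1, so the loss lies in [-1, 0]
    return -abs(np.trace(A.conj().T @ U)) / np.sqrt(2**n)

A = normalize_dense(U)
print(negative_overlap_dense(A, U))  # -1.0: the normalized target itself minimizes the loss
```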

I get the following error when trying to use TNOptimizer with a custom MPO:

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
Cell In[25], line 9
      1 optmzr = qtn.TNOptimizer(
      2     tn_guess,                           # our initial input, the tensors of which to optimize
      3     loss_fn=negative_overlap,
   (...)
      7     optimizer='adam',                   # supplied to scipy.minimize ex: 'L-BFGS-B'
      8 )
----> 9 mpo_opt = optmzr.optimize(100)

File c:\Users\user\.conda\envs\quimb\Lib\site-packages\quimb\tensor\optimize.py:1405, in TNOptimizer.optimize(self, n, tol, jac, hessp, optlib, **options)
   1372 def optimize(
   1373     self, n, tol=None, jac=True, hessp=False, optlib="scipy", **options
   1374 ):
   1375     """Run the optimizer for ``n`` function evaluations, using by default
   1376     :func:`scipy.optimize.minimize` as the driver for the vectorized
   1377     computation. Supplying the gradient and hessian vector product is
   (...)
   1403     tn_opt : TensorNetwork
   1404     """
-> 1405     return {
   1406         "scipy": self.optimize_scipy,
   1407         "nlopt": self.optimize_nlopt,
   1408     }[optlib](n=n, tol=tol, jac=jac, hessp=hessp, **options)
...
   5981 def raise_from_not_ok_status(e, name) -> NoReturn:
   5982   e.message += (" name: " + str(name if name is not None else ""))
-> 5983   raise core._status_to_exception(e) from None

InvalidArgumentError: cannot compute MatMul as input #1(zero-based) was expected to be a double tensor but is a complex128 tensor [Op:MatMul] name:

Custom MPO and target U:

n = 6
gate2 = 'CZ'

# the hamiltonian
H = qu.ham_ising(n, jz=1.0, bx=0.7, cyclic=False)

# the propagator for the hamiltonian
t = 2
U_dense = qu.expm(-1j * t * H)

# 'tensorized' version of the unitary propagator
U = qtn.Tensor(
    data=U_dense.reshape([2] * (2 * n)),
    inds=[f'k{i}' for i in range(n)] + [f'b{i}' for i in range(n)],
    tags={'U_TARGET'}
)
U.draw(color=['U_TARGET', 'MPO'])

chi = [2, 2, 3, 3, 2, 2]

d = 2
tn_guess = qtn.TensorNetwork([
    qtn.Tensor(
        np.random.normal(size=(d, d, chi[i], chi[(i + 1) % n])),
        inds=(f'b{i}', f'k{i}', f'l{i}', f'l{(i + 1) % n}'),
        tags={'MPO'},
    )
    for i in range(n)
])
tn_guess.draw(color=['U_TARGET', 'MPO'])

(tn_guess.H & U).draw(color=['U_TARGET', 'MPO'])
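
One thing worth noting about this construction: `np.random.normal` produces real float64 arrays, while the target data `U_dense` is complex128. A minimal numpy-only sketch of a complex-valued initializer (the function name here is hypothetical, not a quimb API) that would make the guess tensors carry the target's dtype from the start:

```python
import numpy as np

rng = np.random.default_rng(42)

def complex_normal(shape):
    # draw real and imaginary parts independently -> a complex128 array
    return rng.normal(size=shape) + 1j * rng.normal(size=shape)

arr = complex_normal((2, 2, 2, 2))
print(arr.dtype)  # complex128
```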
jcmgray commented 2 months ago

So tensorflow is a bit annoying in that it requires tensor dtypes to match even when how to cast them is obvious. Can you try explicitly casting (you can call `tn.astype_()`) to make sure all the input TNs, both the target and the constant ones, have exactly the same dtype?

saurabh-shringarpure commented 2 months ago

I tried to cast with `tn_guess.astype_(U)` but now I get another, similar error: `TypeError: x and y must have the same dtype, got tf.float64 != tf.complex128`.

jcmgray commented 2 months ago

The argument to `astype` should be a dtype specifier, like `'complex128'`, or you could inherit it dynamically using `astype(U.dtype)`.
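
In plain numpy terms (a stand-in sketch, since the same dtype rules apply to the arrays underlying the tensor network), both forms of the cast produce the same dtype:

```python
import numpy as np

data = np.random.normal(size=(2, 2))           # float64, like the real-valued guess tensors
target = np.zeros((2, 2), dtype='complex128')  # stand-in for the target U's data

cast_explicit = data.astype('complex128')   # explicit dtype specifier
cast_inherited = data.astype(target.dtype)  # inherit the dtype dynamically
print(cast_explicit.dtype == cast_inherited.dtype)  # True
```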

saurabh-shringarpure commented 2 months ago

The error about the type mismatch persists even with `tn.astype_(U.dtype)`. Are there any additional constraints on `tn` for input to `TNOptimizer`?
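
One way to narrow this down is to verify that every array entering the computation really ends up with the same dtype before the optimizer runs. A numpy-only sketch of such a check (the dictionary of arrays is a hypothetical stand-in for the tensors of the guess TN and the target):

```python
import numpy as np

# hypothetical stand-ins for the arrays inside the guess TN and the target
arrays = {
    'guess_site_0': np.random.normal(size=(2, 2, 2, 2)).astype('complex128'),
    'guess_site_1': np.random.normal(size=(2, 2, 2, 2)).astype('complex128'),
    'target_U': np.zeros((4, 4), dtype='complex128'),
}

dtypes = {name: a.dtype for name, a in arrays.items()}
# a mixed set here would identify the culprit tensor by name
assert len(set(dtypes.values())) == 1, f"mixed dtypes: {dtypes}"
print(dtypes)
```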