jcmgray / quimb

A python library for quantum information and many-body calculations including tensor networks.
http://quimb.readthedocs.io

minimization of expectation value #227

Open ValentinKasper opened 5 months ago

ValentinKasper commented 5 months ago


The following minimal example

import quimb as qu
import quimb.tensor as qtn

L = 3
Z = qu.pauli('Z')

bond_dim = 4
mps = qtn.MPS_rand_state(L, bond_dim, cyclic=True)

def normalize_state(psi):
    return psi / (psi.H @ psi) ** 0.5

def expectation_val(psi):
    # negative squared expectation value of Z on site 1
    return -(psi.H @ psi.gate(Z, 1)) ** 2

optmzr = qtn.TNOptimizer(
    mps,                                
    loss_fn=expectation_val,
    norm_fn=normalize_state,
    autodiff_backend='torch',      
    optimizer='L-BFGS-B',               
)

mps_opt = optmzr.optimize(100) 

leads to the error

TypeError: tensordot(): argument 'other' (position 2) must be Tensor, not numpy.ndarray

Unfortunately, I am unable to track it down. Can you please help? Thanks a lot!

jcmgray commented 5 months ago

The problem is just that Z is a numpy array whereas the tensors during the optimization are torch tensors, which are not compatible.
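The mismatch can be reproduced outside quimb. A hypothetical minimal repro (assuming torch is installed) showing the same TypeError:

```python
import numpy as np
import torch

# Contracting a torch tensor against a raw numpy array fails, because
# torch.tensordot requires both operands to be torch Tensors. This is
# exactly what happens inside the optimization when Z stays a numpy array.
a = torch.ones((2, 2))
b = np.ones((2, 2))
try:
    torch.tensordot(a, b, dims=1)
except TypeError as e:
    print(type(e).__name__)  # TypeError
```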

The 'proper' way to handle this is to pass Z as an argument of your loss function rather than capturing it as a closure, and then supply loss_constants={"Z": Z}, which lets quimb know which objects need to be converted to whichever backend is in use.

You could also just convert Z to a torch tensor yourself, if you are always going to use the torch backend!
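A minimal sketch of that second option, assuming the torch backend and a real operator (Pauli Z is built by hand here so the sketch stays self-contained, rather than via qu.pauli):

```python
import numpy as np
import torch

# Convert the operator to a torch tensor up front, with an explicit
# dtype matching the MPS tensors, so every contraction stays on the
# torch backend.
Z_np = np.array([[1.0, 0.0], [0.0, -1.0]])
Z = torch.as_tensor(Z_np, dtype=torch.float64)
```

With this, `Z` can be used directly inside the loss function without `loss_constants`.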

ValentinKasper commented 5 months ago

Thank you so much for your fast reply. If I understand you correctly, you suggest:

import quimb as qu
import quimb.tensor as qtn

L = 3
Z = qu.pauli('Z')

bond_dim = 4
mps = qtn.MPS_rand_state(L, bond_dim, cyclic=True)

def normalize_state(psi):
    return psi / (psi.H @ psi) ** 0.5

def expectation_val(psi, Z):
    # negative squared expectation value of Z on site 1
    return -(psi.H @ psi.gate(Z, 1)) ** 2

optmzr = qtn.TNOptimizer(
    mps,                                
    loss_fn=expectation_val,
    norm_fn=normalize_state,
    loss_constants={"Z": Z},
    autodiff_backend='torch',      
    optimizer='L-BFGS-B',               
)

mps_opt = optmzr.optimize(100) 

This leads to the error

RuntimeError: both inputs should have same dtype

I understand that qu.pauli('Z') is a numpy array. I will try out the pure pytorch solution you suggest as well.

Let me know if you have any comments.

jcmgray commented 5 months ago

Hi @ValentinKasper, yes, for torch you just need to make the arrays in all the tensors the same dtype (or explicitly cast them when necessary). If your Hamiltonian is real, you can just supply e.g. qu.pauli('z', dtype="float64"). If it is complex, you would instead change the dtype of the TN. Note that the loss should always be real, so you might have to take the real part explicitly.
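To illustrate both points in plain numpy (a sketch, not quimb API): Pauli Z is real, so casting it to float64 is lossless, and even with a complex state the expectation value of a Hermitian operator is real only up to floating-point noise, so a loss should take the real part explicitly.

```python
import numpy as np

# Pauli Z is real-valued, so a float64 copy loses nothing
# (Pauli Y, by contrast, is genuinely complex).
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Z_real = Z.real.astype("float64")
assert np.allclose(Z, Z_real)

# With a complex normalized state, <psi|Z|psi> is real up to
# floating-point noise; take .real before squaring the loss.
psi = np.array([0.6, 0.8j])
ev = psi.conj() @ (Z @ psi)
loss = -ev.real ** 2
```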