niazalikhan87 / qutip

Automatically exported from code.google.com/p/qutip

Steady state solver uses too much memory #11

Closed: GoogleCodeExporter closed this issue 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. Use steady(L) with a high-dimensional Hilbert space (a minimal sketch follows this list).
2. The system runs out of memory (H_dim > 130 is already impossible on an 8 GB MacBook).
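
A minimal reproduction sketch (my own construction from the description above, not code from the report; the damped-cavity model and the parameter values are illustrative):

# Hypothetical reproduction: a damped cavity mode with a large Fock cutoff N.
from qutip import destroy, liouvillian, steady

N = 150                    # just above the reported ~130 limit on 8 GB
a = destroy(N)
H = a.dag() * a            # harmonic-oscillator Hamiltonian
c_ops = [0.1 * a]          # a single damping channel
L = liouvillian(H, c_ops)
rho_ss = steady(L)         # exhausts memory for large N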

What is the expected output? What do you see instead?
Ideally it should produce a result whenever the density matrix and the
Liouvillian fit into memory. The current limit of H_dim ~ 130 on 8 GB is very
far from that optimum.

What version of the product are you using? On what operating system?
QuTiP 1.1.3, Mac OS 10.7.3

Please provide any additional information below.

As far as I understand, the problem is the use of spsolve, which performs an LU
decomposition and essentially converts the sparse matrix to a dense one. There
are other methods available in SciPy, like bicg (BIConjugate Gradient
iteration), which should be more memory efficient. However, I wasn't able to
use it in place of spsolve for some reason (I'm new to Python).
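
For reference, a sketch of what such a swap might look like (my own untested attempt, not QuTiP code; the row-replacement trace constraint, the bicgstab variant, and the column-stacking convention are my assumptions):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def steady_iterative(L, N):
    # Solve L*v = 0 iteratively; L is the Liouvillian as a scipy.sparse
    # matrix and N is the Hilbert-space dimension.
    A = sp.lil_matrix(L)
    b = np.zeros(N * N, dtype=complex)
    b[0] = 1.0
    # Replace the first equation by the trace constraint tr(rho) = 1;
    # the diagonal elements of the vectorized rho sit at stride N + 1.
    A[0, :] = 0.0
    A[0, ::N + 1] = 1.0
    v, info = spla.bicgstab(sp.csr_matrix(A), b)
    if info != 0:
        raise RuntimeError("bicgstab did not converge (info=%d)" % info)
    return v.reshape((N, N), order='F')  # assuming column-stacked vec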

As a workaround, I iteratively use odesolve:

fid = 1.0  # convergence measure: 1 - fidelity between successive states
it = 0
while (fid > tol) and (it < maxiter):
    sysoa = odesolve(H, psi0, tlist, [Ca, Cb, Cab], [])[stepN]
    fid = 1 - fidelity(sysoa, psi0)
    psi0 = sysoa
    print fid
    it += 1

This allows raising H_dim to 900, but it is still not very efficient. It
requires keeping two density matrices, while in principle only a single density
matrix is needed to iteratively solve the differential equation up to the
steady state.

I tried doing simple forward-Euler iterations as follows:

import numpy.linalg as la
from numpy import inf

L = liouvillian(H, [Ca, Cb, Cab])
fidnorm = 1.0
it = 0
while (fidnorm > tol) and (it < maxiter):
    it1 = 0
    while (it1 < maxiter1):
        # explicit Euler step of dv/dt = L*v on the vectorized density matrix
        v = v + (L.data * v) * dt
        it1 += 1
    fidnorm = la.norm(L.data * v, inf)  # residual distance from steady state
    it += 1
    print fidnorm

It works, but during initialization it briefly consumes a huge amount of RAM,
so the limit is the same as for the current steady(L).

Original issue reported on code.google.com by D.Vutshi on 6 Mar 2012 at 10:17

GoogleCodeExporter commented 8 years ago
This issue was with an older portion of the code that converted the sparse
matrix to a dense one.  The solver now keeps everything sparse where possible.
For a Hilbert space of 900, I have run it on an 8 GB Mac and it used roughly
850 MB of memory.  I could run an example in parallel using four threads
simultaneously, taking up roughly 4 GB of memory.

I have attached the corrected file.

Thank you for your help; your name will be added to the contributors list.

Paul

Original comment by nonhermitian on 8 Mar 2012 at 12:25

Attachments:

GoogleCodeExporter commented 8 years ago
I forgot that you need the updated Qobj file as well.

Original comment by nonhermitian on 8 Mar 2012 at 2:09

GoogleCodeExporter commented 8 years ago
It almost worked even without Qobj. I had only one problem, with this function:

def _sp_inf_norm(op):
    return max([sum(abs(op[:,k])) for k in xrange(op.shape[1])])

I thought there was something wrong with the brackets. I wasn't able to fix
this norm, so I made another one:

def _sp_inf_norm(op):
    # computes sqrt(tr(op*op.dag())), i.e. the Frobenius norm (see below)
    op = op * op.dag()
    return sqrt(abs(sum([op[k, k] for k in xrange(op.shape[1])])))

Now it works but much more slowly than I expected :).

*****************
The new Qobj asks for some new module:
/Library/Frameworks/EPD64.framework/Versions/7.2/lib/python2.7/site-packages/qutip/Qobj.py in <module>()
     23 import types
     24 from scipy import finfo
---> 25 import qutip.settings as qset
     26 from ptrace import _ptrace
     27 class Qobj():

ImportError: No module named settings

Original comment by D.Vutshi on 8 Mar 2012 at 2:27

GoogleCodeExporter commented 8 years ago
It seems I gave you the wrong file; here is the correct one.  The norm should
work just fine, as I personally ran it last night.  It could be that there is
some issue with the NumPy version you are using.  If you are using NumPy 1.5.x,
then that is probably why; they changed a few things in 1.6.x.  I would give it
another try though.  Also, your new norm is not the inf-norm for the given
operator; you are computing the trace norm.  You could just call op.norm() and
get the same answer.
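
For clarity, the standard definitions at issue (my own reference sketch; the function names are illustrative and A is assumed to be a scipy.sparse matrix):

import numpy as np

def inf_norm(A):
    # inf-norm: maximum absolute row sum
    return np.max(abs(A).sum(axis=1))

def frobenius_norm(A):
    # Frobenius norm: sqrt(sum |a_ij|^2) == sqrt(tr(A A+))
    return np.sqrt(abs(A).power(2).sum())

def trace_norm(A):
    # trace norm: sum of singular values (needs a dense SVD)
    return np.linalg.svd(A.toarray(), compute_uv=False).sum()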

Paul

Original comment by nonhermitian on 8 Mar 2012 at 11:21

Attachments:

GoogleCodeExporter commented 8 years ago
Thank you very much. With the new Qobj everything works well.

I know that it is not the inf-norm; I think it is called the Frobenius norm,
and it is not the same as op.norm(). The problem is that I simulate a bipartite
system, and for dimensions Na > 12, Nb > 12 the calculation of the inf-norm
takes ages (op.norm() is even slower and crashes the computer due to lack of
RAM). So the Frobenius norm is veeery fast compared to the other two.

One more thing: it looks like there is a typo in steady.py on line 82:
        rhoss.dims=[l.dims[0],1]

It should be L.dims, I guess.

Unfortunately, spsolve spoils everything for my particular system. With dense
matrices I was limited to Na, Nb = 11; now the limit is Na = 12, Nb = 14.
spsolve just runs out of memory.

Now I'm using another approach. I define L as a function (not building the huge
superoperator L anymore) and solve the master equation iteratively.
Memory-wise it is limited only by the size of the density matrix.
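
A sketch of this matrix-free approach (my own reconstruction of what is described; H and the collapse operators are assumed to be dense NumPy arrays, and the Frobenius-norm stopping rule follows the earlier comments):

import numpy as np

def apply_L(rho, H, c_ops):
    # Lindblad right-hand side, applied directly to the N x N matrix rho:
    # d(rho)/dt = -i[H, rho] + sum_k ( C rho C+ - (C+ C rho + rho C+ C)/2 )
    out = -1j * (H.dot(rho) - rho.dot(H))
    for C in c_ops:
        Cd = C.conj().T
        CdC = Cd.dot(C)
        out += C.dot(rho).dot(Cd) - 0.5 * (CdC.dot(rho) + rho.dot(CdC))
    return out

def steady_matrix_free(rho, H, c_ops, dt=1e-3, tol=1e-8, maxiter=100000):
    # Forward-Euler evolution until d(rho)/dt is small; memory use is
    # dominated by the N x N density matrix, never an N^2 x N^2 operator.
    for _ in range(maxiter):
        drho = apply_L(rho, H, c_ops)
        if np.linalg.norm(drho) < tol:  # Frobenius norm by default
            break
        rho = rho + dt * drho
    return rho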

Original comment by D.Vutshi on 9 Mar 2012 at 2:30