TRIQS / triqs_0.x

DEPRECATED -- This is the repository of the older versions of TRIQS

scalability -- mpi parallelization #62

Open mhoeppner opened 12 years ago

mhoeppner commented 12 years ago

I have a few questions regarding the parallelism implemented in TRIQS: (1) I linked TRIQS successfully against the Intel MPI and MKL (all tests pass). But if I try to run any of the examples in parallel mode (mpirun -np 2 ...) -- even on only one machine -- the task is started in parallel (e.g. twice), but the processes do not seem to share any information, so the same task just runs n times. Do you have any suggestions where I should look for the error?

(2) I had a short look at the source code and wondered which parts of TRIQS are parallelized. As far as I can tell, the implemented solver routines are parallelized (e.g. the hybridization-expansion CTQMC). Am I right?

Thanks for your support, Marc

mferrero commented 12 years ago

Hi Marc. Only a rather small subset of TRIQS will run in parallel without explicit input from the user. Basically:

1) The CTQMC solver. It is a Monte Carlo algorithm and will deploy over the nodes if it is run in parallel.

2) The sums over k-points. The k-sums in the Wien2TRIQS modules and in Base/SumK/SumK_Discrete.py will be split over the nodes. The same is true for the Hilbert transform in Base/DOS/Hilbert_Transform.py.

Except for the two cases above, the parallelism has to be taken care of by the user. This is made easier with the pytriqs.Base.Utility.MPI module (which is not yet described in the documentation, sorry). I think you should take a look at the module. You will see that it has the usual MPI commands like bcast or send, which you can apply to essentially any Python object. For example, the following script reads a Green's function on the master node and broadcasts it to the other nodes:

from pytriqs.Base.GF_Local import *
from pytriqs.Base.Archive import *
from pytriqs.Base.Utility import MPI

# every node constructs an (empty) Green's function with the same structure
G = GFBloc_ImFreq(Indices=[1], Beta=100)

# only the master node opens the archive and reads the stored Green's function
if MPI.IS_MASTER_NODE():
  A = HDF_Archive("my_archive.h5")
  G = A['Green']

# broadcast the master's Green's function to all the other nodes
G <<= MPI.bcast(G)
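
For work that TRIQS does not split automatically, the same module lets you distribute independent tasks by hand. The sketch below only illustrates the idea; it assumes the module also exposes rank and size, and a recv matching the send mentioned above -- check pytriqs/Base/Utility/MPI.py for the exact names in your version:

from pytriqs.Base.Utility import MPI

# some list of independent tasks, here just dummy parameter values
all_values = [0.1 * n for n in range(20)]

# assumption: MPI.rank and MPI.size give this node's index and the number of nodes
my_values = all_values[MPI.rank::MPI.size]

# every node works only on its own slice (stand-in for a real calculation)
my_results = [v * v for v in my_values]

# assumption: MPI.recv is the counterpart of the send mentioned above
if MPI.IS_MASTER_NODE():
  results = list(my_results)
  for node in range(1, MPI.size):
    results += MPI.recv(node)
else:
  MPI.send(my_results, 0)

Run on a single node, the script still works, since the loop over the other nodes is simply empty.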

There is another simple example in the documentation where MPI is used to write to an archive:

http://ipht.cea.fr/triqs/doc/user_manual/solvers/dmft/dmft.html
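
In case that link moves, the pattern it shows is essentially the mirror image of the script above: only the master node writes to the archive, so the nodes do not all try to touch the same file. A minimal sketch, assuming writes use the same item syntax as the read above (the file name is just a placeholder):

from pytriqs.Base.GF_Local import *
from pytriqs.Base.Archive import *
from pytriqs.Base.Utility import MPI

G = GFBloc_ImFreq(Indices=[1], Beta=100)
# ... every node takes part in the calculation that fills G ...

# only the master node opens the archive and stores the result
if MPI.IS_MASTER_NODE():
  A = HDF_Archive("results.h5")
  A['Green'] = G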

Hope this helps!