Hi Junya,
I apologize for the delay; I was out of commission the last couple of days. I actually worked on this problem some time ago, but ultimately stopped developing it. Normally I'd expect a single-component calculation to be long enough that there is almost no overhead in simply submitting N_orb^4 jobs to the cluster, each an MPI job with an appropriate number of cores.
However, if that's not an option, here's how it could be done (for the code see below):
Both mpi_dispatcher and the compute method of TwoParticleGF receive a communicator as a const reference. That communicator can be created elsewhere; normally it is MPI_COMM_WORLD, but it doesn't have to be.
Boost::mpi (although old by now) actually has an option for splitting communicators. See http://www.boost.org/doc/libs/1_66_0/doc/html/boost/mpi/communicator.html#id-1_3_26_6_5_2_1_1_3_33-bb
Basically, you can create a communicator of size 1 and attach it to TwoParticleGF. That way there is no parallelization within a single component, but you can still parallelize over the components. The drawback is that you then have to allocate nodes to components statically (not dynamically, as mpi_dispatcher does), but it works. A minimal sketch of this idea follows.
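Here is a minimal sketch of the communicator-splitting idea, assuming Boost.MPI. The `chi.compute(self)` call is left as a comment because constructing a TwoParticleGF requires the full pomerol setup; the point is only that compute() accepts a communicator, as described above.

```cpp
#include <boost/mpi.hpp>
#include <iostream>

namespace mpi = boost::mpi;

int main(int argc, char* argv[]) {
    mpi::environment env(argc, argv);
    mpi::communicator world;

    // Give every rank its own communicator of size 1 by using the rank
    // itself as the split "color": ranks with different colors end up
    // in disjoint communicators.
    mpi::communicator self = world.split(world.rank());

    std::cout << "world rank " << world.rank() << "/" << world.size()
              << " -> self size " << self.size() << std::endl;

    // Hypothetical: hand `self` to the compute() call so that the
    // internal dispatch sees a single-rank communicator and effectively
    // runs serially on this rank:
    // chi.compute(self);
    return 0;
}
```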
This was actually implemented in a container class for TwoParticleGF: TwoParticleGFContainer. The code: https://github.com/aeantipov/pomerol/blob/master/src/pomerol/TwoParticleGFContainer.cpp#L68
Test: https://github.com/aeantipov/pomerol/blob/master/test/TwoParticleGFContainerTest.cpp
Dear Andrey,
Thank you for your detailed explanation. I understand that creating a communicator of size 1 is an easy way to disable MPI inside TwoParticleGF; no modification of the library is necessary. Using the container class TwoParticleGFContainer seems to be the better choice. I found that it is used in the sample program hubbard2d.cpp (I had not noticed it because I had only gone through the tutorial). I'll try using it.
The situation I'm working on is the following. I'm computing a multi-orbital model, e.g., d-electrons. The number of orbitals is N_orb = 5 or 10 (without or with spin-orbit coupling), so there are 5^4 or 10^4 components (of course, some are equivalent). Moreover, there is a crystal-field potential, so rotational symmetry is absent and the states cannot be classified by the quantum numbers L_z or J_z. As a result, the number of jobs handled inside the compute method is not large. This brings me to consider using MPI over the orbital indices outside of the compute method, as in the sketch below.
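Concretely, what I have in mind is a static round-robin distribution of the N_orb^4 components over the world ranks, something like the following sketch. Here `compute_component` is a hypothetical stand-in for the per-component pomerol calls, and it would receive the size-1 communicator from your suggestion above.

```cpp
#include <boost/mpi.hpp>

namespace mpi = boost::mpi;

// Hypothetical per-component worker: in practice this would build the
// TwoParticleGF for the index combination (i, j, k, l) and call its
// compute() with the size-1 communicator `self`.
void compute_component(int i, int j, int k, int l,
                       const mpi::communicator& /*self*/) {
    // ... pomerol calls for one component go here ...
}

int main(int argc, char* argv[]) {
    mpi::environment env(argc, argv);
    mpi::communicator world;
    mpi::communicator self = world.split(world.rank());

    const long n_orb = 5;                                     // N_orb = 5 (no spin-orbit coupling)
    const long n_components = n_orb * n_orb * n_orb * n_orb;  // N_orb^4 components

    // Static round-robin allocation: component c goes to world rank
    // c % world.size(); each rank computes only its own share.
    for (long c = world.rank(); c < n_components; c += world.size()) {
        // Decode the flat index c into the four orbital indices.
        int i = static_cast<int>(c / (n_orb * n_orb * n_orb));
        int j = static_cast<int>((c / (n_orb * n_orb)) % n_orb);
        int k = static_cast<int>((c / n_orb) % n_orb);
        int l = static_cast<int>(c % n_orb);
        compute_component(i, j, k, l, self);
    }
    return 0;
}
```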
Best regards, Junya Otsuki
Junya, I see. Hopefully the calculation won't run into memory issues due to the large size of the 2PGF. I hope it works; keep me updated. Closing this for now.
Dear Andrey,
I'm computing two-particle Green's functions for multi-orbital models, which, as you know, is very costly. Since the two-particle GF has many components in multi-orbital models (N_orb^4, where N_orb is the number of orbitals), it would be efficient to use MPI for the loop over the components. I can do this outside of the pomerol library (actually in a Python wrapper), but then I probably have to disable MPI in the TwoParticleGF class at the same time.
Is my understanding correct? And could you tell me how to do it? If I understand correctly, I only need to modify TwoParticleGF::compute() and skip using pMPI::mpi_skel in some way.
Just for your information, a similar question was raised recently in https://github.com/krivenko/pomerol2triqs/issues/5.
Best regards, Junya Otsuki