Closed. utf closed this issue 2 years ago.
The k mesh specified by KMESH_SCPH is used only in the main iterative loop of an SCPH calculation. All post-process calculations are performed with the k-point information in the &kpoint section.

As you noticed, calculation of the bubble free energy is computationally demanding. It is much more expensive than calculation of the imaginary part of the bubble self-energy, which is necessary for computing thermal conductivity.

Regarding the large RAM requirement, the code first prepares all necessary temperature-dependent information (e.g., phonon frequencies, polarization vectors) on a uniform k-grid (specified in the &kpoint section) before starting the calculation of the bubble free energy. Thus, the more temperature points you have, the larger the RAM requirement.
I have some suggestions that might help. The issue may be avoided by reducing the number of temperature points of the SCPH calculation and the k-point density. Please try the following:

1. Run the SCPH calculation with dense k-point and temperature grids but with FE_BUBBLE = 0. This should be feasible.
2. Restart the calculation with FE_BUBBLE = 1 as well as a coarser k-grid and fewer temperature points.

Tips: When restarting, you can change TMIN, TMAX, and DT as long as the temperature points in the restart run are a subset of those of the initial run. In a restart run (RESTART_SCPH = 1), the code reads the SCPH dynamical matrix from the PREFIX.scph_dymat file.
file. For Step 2 above, I would suggest changing PREFIX
to avoid overwriting the previous results. To restart the SCPH with a new PREFIX
, please create a symbolic link, for example, as
ln -s PREFIX.scph_dymat PREFIX_new.scph_dymat
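Putting the restart settings together, a hypothetical input fragment for the Step 2 run might look as follows. The grouping of keywords into sections and all numerical values here are illustrative only, not taken from the thread; please check the ALAMODE manual for the exact syntax.

```
&general
  PREFIX = PREFIX_new              # changed so the Step 1 results are not overwritten
  MODE = SCPH
  TMIN = 0; TMAX = 1000; DT = 100  # a subset of the Step 1 temperature points
/
&scph
  RESTART_SCPH = 1                 # read PREFIX_new.scph_dymat (the symlink above)
  FE_BUBBLE = 1
  KMESH_SCPH = 8 8 8               # coarser than in Step 1
/
&kpoint
  2
  10 10 10                         # coarse uniform grid for the bubble term
/
```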
Since the bubble free energy is usually much smaller than the QHA or SCP terms, I think (SCP term with a dense k-point grid) + (bubble correction with a coarse k-point grid) gives a reasonably converged value of the total free energy. For example, using a 30 30 30 k mesh for the SCP term (FE_BUBBLE = 0) and a 10 10 10 k mesh for the bubble term (FE_BUBBLE = 1) would hopefully be OK. (Of course, a convergence check should be done carefully in each case.)
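The combination described above amounts to simple post-processing: add the small bubble term from the coarse run to the well-converged SCP free energy from the dense run. A minimal sketch, with made-up numbers standing in for the output of the two runs (the actual anphon output format is not reproduced here):

```python
import numpy as np

# Hypothetical values for illustration; in practice these would be read
# from the outputs of the two anphon runs.
temps = np.array([0.0, 100.0, 200.0, 300.0])           # shared temperature grid (K)
f_scp = np.array([-0.10, -0.12, -0.16, -0.22])         # SCP free energy, dense k, FE_BUBBLE = 0
f_bubble = np.array([-0.001, -0.002, -0.004, -0.007])  # bubble term, coarse k, FE_BUBBLE = 1

# Total free energy: converged SCP term plus the small bubble correction
# computed on a cheaper grid (both on the same temperature points).
f_total = f_scp + f_bubble
```

Because the bubble term is an order of magnitude smaller than the SCP term, the error from the coarser grid enters only through the small correction.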
In my experience, the convergence of the bubble free energy tends to be faster than that of the thermal conductivity.
Thank you very much for the suggestions. I hadn't realised that the bubble correction was more expensive than calculating the imaginary part of the self-energy. But it is good to hear that convergence is faster for the bubble correction.
The tips on reducing the RAM requirement and restarting with a subset of the temperature points are very useful. I am trying them on my system now. I also wonder whether the bubble correction is a smooth function of temperature; in that case I can interpolate it onto a denser temperature mesh. I will give this a go.
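That interpolation idea can be sketched in a few lines, assuming the bubble term has already been extracted as (T, F_bubble) pairs; the values below are made up for illustration. Linear interpolation is used here as a first check, and a spline would be smoother if the correction really is a smooth function of T.

```python
import numpy as np

# Hypothetical bubble free-energy values on a coarse temperature grid;
# real values would come from the FE_BUBBLE = 1 run.
t_coarse = np.array([0.0, 100.0, 200.0, 300.0, 400.0])            # K
f_bubble = np.array([0.0, -0.002, -0.005, -0.009, -0.014])        # eV

# Interpolate the small correction onto a denser temperature mesh.
t_dense = np.arange(0.0, 401.0, 10.0)   # 10 K spacing, 41 points
f_dense = np.interp(t_dense, t_coarse, f_bubble)
```

If the correction looks smooth when plotted, this avoids rerunning the expensive bubble calculation at every temperature point.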
Thank you for developing this fantastic code. The features are very impressive!
Can I also ask: have you benchmarked different parallelisation strategies for the bubble correction? For example, is it best to combine OpenMP threads with MPI processes, or to stick to MPI only?
I haven't done a performance check, but I think a pure MPI run is more efficient in terms of wall time.
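For a pure-MPI run, that typically means pinning each rank to a single thread. A hypothetical job-script fragment (the binary path, input-file name, and rank count are placeholders, not from this thread):

```
# pure MPI: one OpenMP thread per rank
export OMP_NUM_THREADS=1
mpirun -np 32 anphon scph.in > scph.log
```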
Hi, I have some questions about using FE_BUBBLE = 1 (note, I have already enabled it using -D_FE_BUBBLE).

Should the k mesh be set in the &kpoint section or using KMESH_SCPH?

I'm finding that the bubble correction is very slow and uses a lot of memory even for an FCC cubic system. For example, using a &kpoint mesh of 30 30 30, the code warns that it needs 300 GB of memory. This seems like a lot of memory for a simple system, and much less memory is needed to calculate the lattice thermal conductivity, say in phono3py.

Do you have any other tips for using the bubble correction?