TRIQS / tprf

TPRF: The Two-Particle Response Function tool box for TRIQS
https://triqs.github.io/tprf

Problem in calculating susceptibilities for multi-band metal #36

Open ParthaSarathiRana opened 1 year ago

ParthaSarathiRana commented 1 year ago

Hi, I tried to calculate the susceptibility (Lindhard and RPA) for a multi-band metal but got the following error:

Warning: could not identify MPI environment! Starting serial run at: 2022-09-26 00:59:34.973154
num_wann = 28
(700, 28)
Segmentation fault (core dumped)

Can you please help with this problem?

Can the linearized Eliashberg equation be solved for a system with several bands (e.g. this system) using the tprf code?

Can Eliashberg calculations be run with MPI on multiple processors? I got different results from single-core and multi-core runs when solving for the Eliashberg gap on the square lattice example given in the documentation.

I am sharing the necessary files for the multiband metallic system.

wann_tprf.py.txt AuBe.wout.txt AuBe-bands.dat.txt AuBe_hr.dat.txt

Thanks, Partha

HugoStrand commented 1 year ago

Dear Partha,

Thank you for reaching out. I have tested running your script with N_k = 2^3, and it runs just fine.
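For reference, here is a rough sketch of such a reduced-grid run, along the lines of the tprf documentation examples (this is not your wann_tprf.py; the exact function names and signatures may differ between tprf versions, and beta and mu are placeholder values):

```python
# Rough sketch: small-grid Lindhard calculation for the attached Wannier90 files.
# Follows the tprf documentation examples; names/signatures may differ between
# tprf versions. beta and mu below are placeholders, not values from your script.
from triqs_tprf.wannier90 import parse_hopping_from_wannier90_hr_dat
from triqs_tprf.wannier90 import parse_lattice_vectors_from_wannier90_wout
from triqs_tprf.tight_binding import TBLattice
from triqs_tprf.lattice import lindhard_chi00_wk

hopping, num_wann = parse_hopping_from_wannier90_hr_dat('AuBe_hr.dat')
units = parse_lattice_vectors_from_wannier90_wout('AuBe.wout')

H = TBLattice(units=units, hopping=hopping,
              orbital_positions=[(0, 0, 0)] * num_wann,
              orbital_names=[str(i) for i in range(num_wann)])

# N_k = 2^3 keeps the memory footprint manageable for the 28 Wannier bands.
# (Newer tprf versions build e_k via H.fourier(H.get_kmesh(n_k=(2, 2, 2))).)
e_k = H.on_mesh_brillouin_zone(n_k=(2, 2, 2))

chi00_wk = lindhard_chi00_wk(e_k=e_k, nw=1, beta=40.0, mu=0.0)
```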

I think the problem is that the calculation runs out of memory. The script is currently trying to compute the susceptibility on an N_k = 32^3 k-point grid with N_w = 28 Wannier bands, so the generalized susceptibility has N_w^4 * N_k = 28^4 * 32^3 ≈ 2 * 10^10 elements per frequency point. At 16 bytes per complex element, I estimate this requires on the order of 0.3 TB (terabytes) of RAM per frequency, i.e. roughly 1 TB already for a few frequency points.
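As a quick back-of-the-envelope check (assuming one complex double, i.e. 16 bytes, per element):

```python
# Memory estimate for the generalized susceptibility, per bosonic frequency,
# assuming 16 bytes (one complex double) per element.
n_orb = 28      # Wannier bands
n_k = 32**3     # k-points requested in the script
n_elem = n_orb**4 * n_k
print(f'elements per frequency : {n_elem:.1e}')                 # ~2.0e10
print(f'memory per frequency   : {16 * n_elem / 1e12:.2f} TB')  # ~0.32 TB
```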

This is an example of how the quartic scaling with respect to the number of orbitals puts hard constraints on what calculations can currently be done.
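To make the scaling concrete, the same estimate for a few orbital counts (16 bytes per element, N_k = 32^3 fixed):

```python
# Quartic growth of the susceptibility storage with the number of orbitals,
# at fixed N_k = 32^3 and 16 bytes per complex element.
n_k = 32**3
for n_orb in (1, 2, 5, 10, 28):
    gb = 16 * n_orb**4 * n_k / 1e9
    print(f'n_orb = {n_orb:2d}  ->  {gb:9.1f} GB per frequency point')
```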

Regarding your other questions, could you please provide a guide on how to reproduce the MPI issue you observe? A script, a description of the steps taken to run the single-core and multi-core calculations, and a summary of how the results differ would be fantastic. Please consider posting separate questions as separate issues; this makes it easier to handle them in parallel.

Thank you for being an active user.

Best regards, Hugo