
Issues of pw2qmcpack.x #2310

Closed: xuzpgroup closed this issue 4 years ago

xuzpgroup commented 4 years ago

Dear all,

I am running the dft-inputs-polarized example in qmcpack-3.8.0.

When I run pw2qmcpack.x, it returns the following:


 Program pw2qmcpack v.6.4.1 starts on 14Feb2020 at 15: 4:33 

 This program is part of the open-source Quantum ESPRESSO suite
 for quantum simulation of materials; please cite
     "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
     "P. Giannozzi et al., J. Phys.:Condens. Matter 29 465901 (2017);
      URL http://www.quantum-espresso.org", 
 in publications or presentations arising from this work. More details at
 http://www.quantum-espresso.org/quote

 Parallel version (MPI), running on     1 processors

 MPI processes distributed on     1 nodes

 Reading data from directory:
 ./LiH.save/

 IMPORTANT: XC functional enforced from input :
 Exchange-correlation      = PZ ( 1  1  0  0 0 0)
 Any further DFT definition will be discarded
 Please, verify this is what you really want

 G-vector sticks info
 --------------------
 sticks:   dense  smooth     PW     G-vecs:    dense   smooth      PW
 Sum        3115    3115    847               115339   115339   16145

 Generating pointlists ...
 new r_m :   0.2063 (alat units)  1.4644 (a.u.) for type    1
 new r_m :   0.2063 (alat units)  1.4644 (a.u.) for type    2

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 0 PID 17981 RUNNING AT c02b01n09
=   KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================

Can you help me?

Best, Justin

ye-luo commented 4 years ago

Did you run the scf step with pw.x first? Were you able to run pw.x without error? Is your pw.x version 6.4.1 as well?
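
For reference, the conversion workflow being checked here is: a pw.x scf run, a pw.x nscf run, then pw2qmcpack.x on the resulting save directory. A minimal sketch of that sequence; the input file names and rank counts are illustrative:

 # scf ground-state run; writes the <outdir>/<prefix>.save/ directory
 mpirun -np 4 pw.x -in scf.in > scf.out

 # nscf run generating the orbitals QMCPACK will use
 mpirun -np 4 pw.x -in nscf.in > nscf.out

 # convert the orbitals to HDF5 for QMCPACK; a single rank
 # matches the pw2qmcpack run shown earlier in this thread
 mpirun -np 1 pw2qmcpack.x < LiH-pw2x.in > LiH-pw2x.out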

xuzpgroup commented 4 years ago

I ran the scf and nscf steps with pw.x first, without errors. The version of pw.x is 6.4.1 as well.

By the way, the version of HDF5 is 1.10.5.

xuzpgroup commented 4 years ago

> Did you run the scf step with pw.x first? Were you able to run pw.x without error? Is your pw.x version 6.4.1 as well?

Now I have installed the same version on my personal computer. It returns:

 G-vector sticks info
 --------------------
 sticks:   dense  smooth     PW     G-vecs:    dense   smooth      PW
 Sum        3115    3115    847               115339   115339   16145

 Generating pointlists ...
 new r_m :   0.2063 (alat units)  1.4644 (a.u.) for type    1
 new r_m :   0.2063 (alat units)  1.4644 (a.u.) for type    2

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Error in routine read_rhog (1):
error reading file ./out/LiH.save/charge-density
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

I guess the problem may come from reading the charge density.

ye-luo commented 4 years ago

Could you verify that the prefix strings in your pw.x and pw2qmcpack input files are consistent? And the outdir string as well.

Your first run prints ./LiH.save/ and your second run prints ./out/LiH.save/charge-density; the outdir values seem different.
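
A quick consistency check along these lines (a sketch; the file names match those given later in this thread):

 # both inputs must agree on prefix and outdir, so that pw2qmcpack.x
 # reads the same <outdir>/<prefix>.save/ directory that pw.x wrote
 grep -E "prefix|outdir" nscf.in LiH-pw2x.in

 # the save directory and charge density should already exist:
 ls -l ./out/LiH.save/charge-density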

xuzpgroup commented 4 years ago

> Could you verify that the prefix strings in your pw.x and pw2qmcpack input files are consistent? And the outdir string as well.
>
> Your first run prints ./LiH.save/ and your second run prints ./out/LiH.save/charge-density; the outdir values seem different.

I am sorry for the confusion. I changed the prefix between these two runs, but both returned errors.

The first one was run on a cluster and the second on my PC.

ye-luo commented 4 years ago

Could you attach your nscf input and output and your pw2qmcpack input and output files on your PC?

xuzpgroup commented 4 years ago

nscf.in

&control
   calculation = 'nscf'
   restart_mode = 'from_scratch',
   tstress = .true.
   prefix = 'LiH',
   pseudo_dir = './',
   outdir = './out'
   wf_collect = .true.
   !disk_io='low'
/
&system
   ibrav = 2,
   celldm(1) = 7.100,
   nat = 2,
   ntyp = 2,
   nspin = 2,
   tot_magnetization = 0,
   degauss = 0.001,
   smearing = 'mp',
   occupations = 'smearing',
   ecutwfc = 450
   ecutrho = 1800
   nosym = .true.
   noinv = .true.
/
&electrons
   conv_thr = 1.0d-10
   mixing_beta = 0.7
/
ATOMIC_SPECIES
   Li  9.01  Li.ncpp
   H   1.01  H.ncpp
ATOMIC_POSITIONS
   Li  0.00  0.00  0.00
   H   0.50  0.50  0.50
K_POINTS {crystal}
   2
   0.0  0.0  0.0  2.0
   0.5  0.0  0.0  2.0

nscf.out

Program PWSCF v.6.4.1 starts on 14Feb2020 at 18:55:17

 This program is part of the open-source Quantum ESPRESSO suite
 for quantum simulation of materials; please cite
     "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
     "P. Giannozzi et al., J. Phys.:Condens. Matter 29 465901 (2017);
      URL http://www.quantum-espresso.org", 
 in publications or presentations arising from this work. More details at
 http://www.quantum-espresso.org/quote

 Parallel version (MPI), running on    25 processors

 MPI processes distributed on     1 nodes
 R & G space division:  proc/nbgrp/npool/nimage =      25
 Waiting for input...
 Reading input from standard input
 Message from routine read_cards :
 DEPRECATED: no units specified in ATOMIC_POSITIONS card
 Message from routine read_cards :
 ATOMIC_POSITIONS: units set to alat

 Current dimensions of program PWSCF are:
 Max number of different atomic species (ntypx) = 10
 Max number of k-points (npk) =  40000
 Max angular momentum in pseudopotentials (lmaxx) =  3

 Atomic positions and unit cell read from directory:
 ./out/LiH.save/

 Subspace diagonalization in iterative solution of the eigenvalue problem:
 a serial algorithm will be used

 Parallelization info
 --------------------
 sticks:   dense  smooth     PW     G-vecs:    dense   smooth      PW
 Min         124     124     33                 4612     4612     644
 Max         125     125     34                 4617     4617     647
 Sum        3115    3115    847               115339   115339   16145

 bravais-lattice index     =            2
 lattice parameter (alat)  =       7.1000  a.u.
 unit-cell volume          =      89.4777 (a.u.)^3
 number of atoms/cell      =            2
 number of atomic types    =            2
 number of electrons       =         4.00 (up:   2.00, down:   2.00)
 number of Kohn-Sham states=            6
 kinetic-energy cutoff     =     450.0000  Ry
 charge density cutoff     =    1800.0000  Ry
 Exchange-correlation      = PZ ( 1  1  0  0 0 0)

 celldm(1)=   7.100000  celldm(2)=   0.000000  celldm(3)=   0.000000
 celldm(4)=   0.000000  celldm(5)=   0.000000  celldm(6)=   0.000000

 crystal axes: (cart. coord. in units of alat)
           a(1) = (  -0.500000   0.000000   0.500000 )  
           a(2) = (   0.000000   0.500000   0.500000 )  
           a(3) = (  -0.500000   0.500000   0.000000 )  

 reciprocal axes: (cart. coord. in units 2 pi/alat)
           b(1) = ( -1.000000 -1.000000  1.000000 )  
           b(2) = (  1.000000  1.000000  1.000000 )  
           b(3) = ( -1.000000  1.000000 -1.000000 )  

 PseudoPot. # 1 for Li read from file:
 ./Li.ncpp
 MD5 check sum: 65de287d48393da5fb638c7e9795c6d9
 Pseudo is Norm-conserving, Zval =  3.0
 Generated by old ld1 code (numerical format)
 Using radial grid of 1104 points,  0 beta functions with: 

 PseudoPot. # 2 for H  read from file:
 ./H.ncpp
 MD5 check sum: ca77a6b8a95f401fb520b57e97612fbc
 Pseudo is Norm-conserving, Zval =  1.0
 Generated by old ld1 code (numerical format)
 Using radial grid of 1076 points,  0 beta functions with: 

 atomic species   valence    mass     pseudopotential
    Li             3.00     9.01000     Li( 1.00)
    H              1.00     1.01000     H ( 1.00)

 Starting magnetic structure 
 atomic species   magnetization
    Li           0.000
    H            0.000

 No symmetry found

Cartesian axes

 site n.     atom                  positions (alat units)
     1           Li  tau(   1) = (   0.0000000   0.0000000   0.0000000  )
     2           H   tau(   2) = (   0.5000000   0.5000000   0.5000000  )

 number of k points=     2  Methfessel-Paxton smearing, width (Ry)=  0.0010
                   cart. coord. in units 2pi/alat
    k(    1) = (   0.0000000   0.0000000   0.0000000), wk =   0.5000000
    k(    2) = (  -0.5000000  -0.5000000   0.5000000), wk =   0.5000000

 Dense  grid:   115339 G-vectors     FFT dimensions: (  72,  72,  72)

 Estimated max dynamical RAM per process >       3.00 MB

 Estimated total dynamical RAM >      74.95 MB
 Generating pointlists ...
 new r_m :   0.2063 (alat units)  1.4644 (a.u.) for type    1
 new r_m :   0.2063 (alat units)  1.4644 (a.u.) for type    2

 The potential is recalculated from file :
 ./out/LiH.save/charge-density

 Starting wfcs are    2 randomized atomic wfcs +    4 random wfcs

 Band Structure Calculation
 Davidson diagonalization with overlap

 ethr =  2.50E-12,  avg # of iterations = 15.0

 total cpu time spent up to now is        0.7 secs

 End of band structure calculation

------ SPIN UP ------------

      k = 0.0000 0.0000 0.0000 ( 14331 PWs)   bands (ev):

-41.6775 -4.5920 20.5556 21.1788 21.1788 21.1788

      k =-0.5000-0.5000 0.5000 ( 14398 PWs)   bands (ev):

-41.5706 -1.6935 6.8351 17.0114 19.5132 19.5132

------ SPIN DOWN ----------

      k = 0.0000 0.0000 0.0000 ( 14331 PWs)   bands (ev):

-41.6775 -4.5920 20.5556 21.1788 21.1788 21.1788

      k =-0.5000-0.5000 0.5000 ( 14398 PWs)   bands (ev):

-41.5706 -1.6935 6.8351 17.0114 19.5132 19.5132

 the spin up/dw Fermi energies are     5.4783    5.4783 ev

 Writing output data file LiH.save/

 init_run     :      0.18s CPU      0.19s WALL (       1 calls)
 electrons    :      0.30s CPU      0.32s WALL (       1 calls)

 Called by init_run:
 wfcinit      :      0.00s CPU      0.00s WALL (       1 calls)
 potinit      :      0.03s CPU      0.03s WALL (       1 calls)
 hinit0       :      0.10s CPU      0.11s WALL (       1 calls)

 Called by electrons:
 c_bands      :      0.30s CPU      0.32s WALL (       1 calls)
 v_of_rho     :      0.02s CPU      0.02s WALL (       1 calls)

 Called by c_bands:
 cegterg      :      0.27s CPU      0.28s WALL (       4 calls)

 Called by sum_band:

 Called by *egterg:
 h_psi        :      0.24s CPU      0.24s WALL (      68 calls)
 g_psi        :      0.00s CPU      0.00s WALL (      60 calls)
 cdiaghg      :      0.02s CPU      0.02s WALL (      64 calls)

 Called by h_psi:
 h_psi:pot    :      0.24s CPU      0.24s WALL (      68 calls)
 vloc_psi     :      0.24s CPU      0.24s WALL (      68 calls)

 General routines
 fft          :      0.02s CPU      0.02s WALL (       4 calls)
 fftw         :      0.22s CPU      0.23s WALL (     592 calls)
 davcio       :      0.00s CPU      0.01s WALL (       8 calls)

 Parallel routines
 fft_scatt_xy :      0.02s CPU      0.02s WALL (     596 calls)
 fft_scatt_yz :      0.16s CPU      0.16s WALL (     596 calls)

 PWSCF        :      0.70s CPU      0.80s WALL

This run was terminated on: 18:55:18 14Feb2020

=------------------------------------------------------------------------------=
   JOB DONE.
=------------------------------------------------------------------------------=

LiH-pw2x.in

&inputpp
   outdir = './out'
   prefix = 'LiH'
   write_psir = .false.
/

xuzpgroup commented 4 years ago

Dear Ye Luo,

I have solved the issue.

I found that pw.x and pw2qmcpack.x should be compiled with the same settings. The pw.x I used before was compiled by someone else, even though the version is the same.

Thank you very much.

Best, Justin
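
A note for readers who hit the same mismatch: one way to guarantee consistent settings is to build pw.x and pw2qmcpack.x from the same patched QE source tree, against the same HDF5. A sketch, assuming the QE 6.4.1 patch script shipped in the QMCPACK source tree (the script name and make targets may differ between versions):

 # from the QMCPACK source tree
 cd external_codes/quantum_espresso
 ./download_and_patch_qe6.4.1.sh      # fetch QE 6.4.1 and apply the pw2qmcpack patch
 cd qe-6.4.1
 ./configure --with-hdf5=/path/to/hdf5-1.10.5
 make pw pp                           # pw.x plus post-processing, including pw2qmcpack.x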

ye-luo commented 4 years ago

Good to hear that your problem is solved. We are working on adding the converter directly to pw.x, so there will no longer be a separate pw2qmcpack step. Thus your issue will not happen in future versions.

xuzpgroup commented 4 years ago

Good news. Looking forward to using the new version.