Closed: heatherkellyucl closed this issue 5 years ago.
Started checking how much of our current buildscript needs changing.
Made a preliminary build script, and am currently trying a build on Legion to see how many workarounds are still necessary.
mpif90 -static-intel -o pw.x \
pwscf.o libpw.a ../../Modules/libqemod.a ../../FFTXlib/libqefft.a ../../LAXlib/libqela.a /dev/shm/qe-6.1//clib/clib.a /dev/shm/qe-6.1//iotk/src/libiotk.a -lfftw3xf_intel -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
ld: cannot find -lfftw3xf_intel
Our install of the Intel 2017 compiler doesn't appear to have any libfftw in /shared/ucl/apps/intel/2017.Update1/mkl/lib/intel64
(Intel 2015 does).
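A quick way to check is just to list that directory:

ls /shared/ucl/apps/intel/2017.Update1/mkl/lib/intel64 | grep -i fftw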
All the interfaces need building:
cd /shared/ucl/apps/intel/2017.Update1/mkl/interfaces/fftw3xf
make libintel64 compiler=gnu INSTALL_DIR=/shared/ucl/apps/intel/2017.Update1/mkl/lib/intel64
blas95 and lapack95 need ifort (so the Intel compiler module must be loaded). fftw2x_cdft, fftw3x_cdft and mklmpi need Intel MPI loaded as well.
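i.e. something like this before running the interface builds (the module names below are placeholders for illustration, not necessarily the real ones on our clusters):

# hypothetical module names - substitute the actual Intel compiler and Intel MPI modules
module load compilers/intel/2017/update1
module load mpi/intel/2017/update1/intel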
Put a message in the Intel compiler buildscript to remind the next person to run the interface buildscript after installing.
This is what needs doing, with the relevant Intel compiler module and Intel MPI loaded:
cd $MKLROOT/interfaces
for dir in */; do
    cd $dir
    make libintel64 compiler=intel INSTALL_DIR=$MKLROOT/lib/intel64
    cd ..
done
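A slightly more defensive variant (just a sketch, not what the buildscript does) runs each build in a subshell so a failed cd or make in one interface directory doesn't derail the rest:

cd "$MKLROOT/interfaces"
for dir in */; do
    (
        cd "$dir" || exit 1
        make libintel64 compiler=intel INSTALL_DIR="$MKLROOT/lib/intel64"
    ) || echo "interface build failed in $dir" >&2
done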
Made intel-compilers-interfaces_install and added a reminder to intel-compilers-2017-update1_install.
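The reminder is just a couple of lines at the end of the compiler buildscript, roughly like this (wording illustrative, not the exact text):

# hypothetical reminder echoed at the end of intel-compilers-2017-update1_install
echo "NOTE: the MKL interface libraries (fftw3xf etc.) are not built automatically."
echo "Load this compiler module plus Intel MPI, then run intel-compilers-interfaces_install."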
Built interfaces on:
Test build of QE finished on Legion. It doesn't have quite the same set of executables for this version as 5.2.0 had; I don't know if that is expected or whether some parts didn't build automatically.
5.2.0 version:
average.x ev.x lambda.x projwfc.x turbo_lanczos.x
band_plot.x fd_ef.x ld1.x pw2bgw.x turbo_spectrum.x
bands_FS.x fd_ifc.x manycp.x pw2gw.x unfold.x
bands.x fd.x manypw.x pw2wannier90.x upf2casino.x
bgw2pw.x fhi2upf.x matdyn.x pw4gww.x uspp2upf.x
blc2wan.x fpmd2upf.x midpoint.x pwcond.x vdb2upf.x
casino2upf.x fqha.x molecularpdos.x pw_export.x virtual.x
cmplx_bands.x gcube2plt.x ncpp2upf.x pwgui wannier90.x
conductor.x generate_rVV10_kernel_table.x neb.x pwi2xsf.x wannier_ham.x
cpmd2upf.x generate_vdW_kernel_table.x oldcp2upf.x pw.x wannier_plot.x
cppp.x gww_fit.x path_interpolation.x q2qstar.x wannier.x
cp.x gww.x pawplot.x q2r.x wfck2r.x
current.x head.x phcg.x q2trans_fd.x wfdd.x
d3.x importexport_binary.x ph.x q2trans.x wfk2etsf.x
decay.x initial_state.x plan_avg.x read_upf_tofile.x wfreq.x
disentangle.x interpolate.x plotband.x rrkj2upf.x write_ham.x
dist.x iotk_print_kinds.x plotproj.x sax2qexml.x wstat.x
dos.x iotk.x plotrho.x sumpdos.x xspectra.x
dynmat.x kgrid.x plot.x sum_sgm.x
embed.x kpoints.x pmw.x turbo_davidson.x
epsilon.x kvecs_FS.x pp.x turbo_eels.x
6.1 version:
average.x fpmd2upf.x manypw.x projwfc.x turbo_davidson.x
bands.x fqha.x matdyn.x pw2bgw.x turbo_eels.x
bgw2pw.x fs.x molecularnexafs.x pw2gw.x turbo_lanczos.x
bse_main.x generate_rVV10_kernel_table.x molecularpdos.x pw2wannier90.x turbo_spectrum.x
casino2upf.x generate_vdW_kernel_table.x ncpp2upf.x pw4gww.x upf2casino.x
cpmd2upf.x gww_fit.x neb.x pwcond.x uspp2upf.x
cppp.x gww.x oldcp2upf.x pw_export.x vdb2upf.x
cp.x head.x path_interpolation.x pwi2xsf.x virtual.x
dist.x importexport_binary.x pawplot.x pw.x wannier_ham.x
dos.x initial_state.x phcg.x q2qstar.x wannier_plot.x
dynmat.x interpolate.x ph.x q2r.x wfck2r.x
epsilon.x iotk_print_kinds.x plan_avg.x q2trans_fd.x wfdd.x
ev.x iotk.x plotband.x q2trans.x xspectra.x
fd_ef.x kpoints.x plotproj.x read_upf_tofile.x
fd_ifc.x lambda.x plotrho.x rrkj2upf.x
fd.x ld1.x pmw.x spectra_correction.x
fhi2upf.x manycp.x pp.x sumpdos.x
May just be able to copy over the pwgui binary as that comes precompiled.
pwgui is a launcher script, and it needs to be able to access its other files, so I think you need to copy its whole directory over. Either copy the launcher into the main bin directory and export $PWGUI so it can find its root directory, or add the PWgui directory to the PATH.
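If we go the wrapper-plus-$PWGUI route, it would look something like this (install prefix and wrapper location are illustrative):

#!/bin/bash
# hypothetical wrapper to place in the main bin directory; paths illustrative
export PWGUI=/shared/ucl/apps/quantum-espresso/6.1/intel-2017/PWgui-6.1
exec "$PWGUI/pwgui" "$@"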
Actually, make all still doesn't make all of them, so I've added in the targets we had before plus gui to see if that does everything.
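Roughly, the build step now looks like this (the extra target list is illustrative; the real one is carried over from the 5.2.0 buildscript):

make all
make gui   # added explicitly; the other extra targets from the old buildscript are omitted here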
Some tests failed:
All done. ERROR: only 177 out of 181 tests passed (2 unknown).
Failed tests in:
/dev/shm/quantum-espresso-build.h5og1z0a/qe-6.1/test-suite/pw_pawatom/
/dev/shm/quantum-espresso-build.h5og1z0a/qe-6.1/test-suite/pw_vdw/
Ones I could still see were:
pw_scf - scf-1.in: Unknown.
pw_vdw - vdw-ts.in: **FAILED**.
p1
ERROR: absolute error 1.50e-01 greater than 1.00e-01. (Test: 188.62. Benchmark: 188.77.)
e1
ERROR: absolute error 7.20e-05 greater than 1.00e-06. (Test: -44.618061. Benchmark: -44.618133.)
Re-running the tests with output logged so the useful parts are visible... there is quite a lot of output, so just looking in the two directories above wasn't that helpful without knowing what I was looking for.
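For the logging, something like this from the test-suite directory does the job (log file name is arbitrary):

cd test-suite
make run-tests-serial 2>&1 | tee qe_tests.log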
pw_pawatom - paw-vcbfgs.in: **FAILED**.
Different sets of data extracted from benchmark and test.
More data in benchmark than in test: ef1.
p1
ERROR: absolute error 2.20e-01 greater than 1.00e-01. (Test: -0.52. Benchmark: -0.3.)
e1
ERROR: absolute error 4.00e-05 greater than 1.00e-06. (Test: -328.23191. Benchmark: -328.23187.)
n1
ERROR: absolute error 2.00e+00 greater than 1.50e+00. (Test: 4.0. Benchmark: 6.0.)
pw_vdw - vdw-ts.in: **FAILED**.
p1
ERROR: absolute error 1.50e-01 greater than 1.00e-01. (Test: 188.62. Benchmark: 188.77.)
e1
ERROR: absolute error 7.20e-05 greater than 1.00e-06. (Test: -44.618061. Benchmark: -44.618133.)
https://www.nsc.liu.se/systems/triolith/software/triolith-software-apps-espresso-6.1-build01.html says for the vdw-ts.in test to pass they had to compile Modules/tsvdw.f90 with -O0 instead of -O2. (They were also using Intel 2017, MKL and Intel MPI).
They also got the "More data in benchmark than in test: ef1." for paw-vcbfgs.in but no numerical errors for that one.
I'm going to see whether a build with FFLAGS=-fp-model strict does any better before trying to build tsvdw alone with -O0.
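The -fp-model strict attempt is just a flag change at configure time; a trimmed-down sketch (all other configure options omitted):

# rebuild from clean with stricter floating-point semantics; other options omitted
./configure FFLAGS="-fp-model strict"
make all
# the Triolith alternative would instead recompile Modules/tsvdw.f90 by hand with -O0
# (or edit its build rule) after a normal configure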
That fixed the vdw-ts.in test! paw-vcbfgs.in is exactly the same as before.
pw_pawatom - paw-vcbfgs.in: **FAILED**.
Different sets of data extracted from benchmark and test.
More data in benchmark than in test: ef1.
p1
ERROR: absolute error 2.20e-01 greater than 1.00e-01. (Test: -0.52. Benchmark: -0.3.)
e1
ERROR: absolute error 4.00e-05 greater than 1.00e-06. (Test: -328.23191. Benchmark: -328.23187.)
n1
ERROR: absolute error 2.00e+00 greater than 1.50e+00. (Test: 4.0. Benchmark: 6.0.)
All done. ERROR: only 178 out of 181 tests passed (2 unknown).
Failed test in:
/dev/shm/quantum-espresso-build.h5og1z0a/qe-6.1/test-suite/pw_pawatom/
This would be the problem in the pawatom test...
< Program PWSCF v.6.1 (svn rev. 13369) starts on 2Mar2017 at 23:52:57
---
> Program PWSCF v.6.1 (svn rev. 13369) starts on 5Jul2017 at 16:23:41
11c11
< Serial multi-threaded version, running on 4 processor cores
---
> Parallel version (MPI), running on 1 processors
There are some +/- 0 differences at the start, one set of intermediate results a different way round, then a bunch of differences:
262c267
< -0.000000000 2.892956709 2.892956709
---
> 0.000000000 2.892956709 2.892956709
264c269
< 2.892956709 2.892956709 -0.000000000
---
> 2.892956709 2.892956709 0.000000000
...
296c301
< ethr = 1.21E-10, avg # of iterations = 2.0
---
> ethr = 6.07E-11, avg # of iterations = 2.0
298c303
< negative rho (up, down): 4.525E-02 0.000E+00
---
> negative rho (up, down): 4.520E-02 0.000E+00
...
344c349
< the Fermi energy is 5.0417 ev
---
> the Fermi energy is 5.0414 ev
...
346,348c351,353
< ! total energy = -328.23191048 Ry
< Harris-Foulkes estimate = -328.23000653 Ry
< estimated scf accuracy < 9.8E-09 Ry
---
> ! total energy = -328.23191046 Ry
> Harris-Foulkes estimate = -328.23000637 Ry
> estimated scf accuracy < 4.9E-09 Ry
354,356c359,361
< one-electron contribution = 4.98859345 Ry
< hartree contribution = 1.21001549 Ry
< xc contribution = -32.27025092 Ry
---
> one-electron contribution = 4.98876435 Ry
> hartree contribution = 1.21013745 Ry
> xc contribution = -32.27050620 Ry
358,359c363,364
< one-center paw contrib. = -286.39473022 Ry
< smearing contrib. (-TS) = 0.00004003 Ry
---
> one-center paw contrib. = -286.39476787 Ry
> smearing contrib. (-TS) = 0.00004012 Ry
363c368
< negative rho (up, down): 4.525E-02 0.000E+00
---
> negative rho (up, down): 4.520E-02 0.000E+00
...
< Computing stress (Cartesian axis) and pressure
<
<
< negative rho (up, down): 4.528E-02 0.000E+00
< total stress (Ry/bohr**3) (kbar) P= -0.23
< -0.00000159 -0.00000000 0.00000000 -0.23 -0.00 0.00
< -0.00000000 -0.00000159 0.00000000 -0.00 -0.23 0.00
< 0.00000000 0.00000000 -0.00000159 0.00 0.00 -0.23
---
> negative rho (up, down): 4.520E-02 0.000E+00
> total stress (Ry/bohr**3) (kbar) P= -0.49
> -0.00000330 0.00000000 0.00000000 -0.49 0.00 0.00
> 0.00000000 -0.00000330 -0.00000000 0.00 -0.49 -0.00
> -0.00000000 -0.00000000 -0.00000330 -0.00 -0.00 -0.49
521c388
< bfgs converged in 3 scf cycles and 2 bfgs steps
---
> bfgs converged in 2 scf cycles and 1 bfgs steps
526c393
< Final enthalpy = -328.2319111033 Ry
---
> Final enthalpy = -328.2319104638 Ry
528,529c395,396
< new unit-cell volume = 326.58490 a.u.^3 ( 48.39489 Ang^3 )
< density = 4.98283 g/cm^3
---
> new unit-cell volume = 326.77762 a.u.^3 ( 48.42345 Ang^3 )
> density = 4.97989 g/cm^3
532,534c399,401
< -0.000000000 2.892387865 2.892387865
< 2.892387865 -0.000000000 2.892387865
< 2.892387865 2.892387865 -0.000000000
---
> 0.000000000 2.892956709 2.892956709
> 2.892956709 -0.000000000 2.892956709
> 2.892956709 2.892956709 0.000000000
754c598
< the Fermi energy is 5.0472 ev
---
> the Fermi energy is 5.0406 ev
756,758c600,602
< ! total energy = -328.23187026 Ry
< Harris-Foulkes estimate = -328.23187026 Ry
< estimated scf accuracy < 4.9E-10 Ry
---
> ! total energy = -328.23191045 Ry
> Harris-Foulkes estimate = -328.23191048 Ry
> estimated scf accuracy < 0.00000054 Ry
760c604
< total all-electron energy = -8395.996629 Ry
---
> total all-electron energy = -8395.996669 Ry
764,769c608,613
< one-electron contribution = 4.99330785 Ry
< hartree contribution = 1.20913996 Ry
< xc contribution = -32.27082136 Ry
< ewald contribution = -15.76867892 Ry
< one-center paw contrib. = -286.39485774 Ry
< smearing contrib. (-TS) = 0.00003995 Ry
---
> one-electron contribution = 4.98883670 Ry
> hartree contribution = 1.20999392 Ry
> xc contribution = -32.27043216 Ry
> ewald contribution = -15.76557831 Ry
> one-center paw contrib. = -286.39477081 Ry
> smearing contrib. (-TS) = 0.00004021 Ry
771c615
< convergence has been achieved in 6 iterations
---
> convergence has been achieved in 4 iterations
773c617
< negative rho (up, down): 4.520E-02 0.000E+00
---
> negative rho (up, down): 4.522E-02 0.000E+00
778c622
< atom 2 type 1 force = 0.00000000 -0.00000000 0.00000000
---
> atom 2 type 1 force = 0.00000000 0.00000000 0.00000000
786,790c630,634
< negative rho (up, down): 4.520E-02 0.000E+00
< total stress (Ry/bohr**3) (kbar) P= -0.30
< -0.00000205 0.00000000 0.00000000 -0.30 0.00 0.00
< 0.00000000 -0.00000205 -0.00000000 0.00 -0.30 -0.00
< -0.00000000 -0.00000000 -0.00000205 -0.00 -0.00 -0.30
---
> negative rho (up, down): 4.522E-02 0.000E+00
> total stress (Ry/bohr**3) (kbar) P= -0.52
> -0.00000356 0.00000000 0.00000000 -0.52 0.00 0.00
> 0.00000000 -0.00000356 0.00000000 0.00 -0.52 0.00
> 0.00000000 0.00000000 -0.00000356 0.00 0.00 -0.52
So the 4 and 6 that were different in the test output appear to be the number of iterations each took to converge.
I think we can ignore that test result as they aren't comparable simulations. There are different make options for running the tests: run-tests-serial is the default. It is possible we might want to run run-tests-pw-parallel, which runs the pw tests only, in parallel. I still don't know whether 4 MPI processes ought to be comparable to 4 threads for these.
Nope: if you run make run-tests-pw-parallel 2>&1 | tee pw_par_test.log in the test-suite directory then more of the tests fail with larger absolute errors than expected, and pawatom is the same except the absolute error has increased from 2.20e-01 to 2.30e-01.
Installed on
PWgui does still need copying in, so I added that to the buildscript and did the copy.
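The copy is essentially just the whole PWgui directory into the install prefix, along these lines (the source location within the build tree is assumed):

cp -r PWgui-6.1 /shared/ucl/apps/quantum-espresso/6.1/intel-2017/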
pwgui requires Itcl, Itk and Iwidgets to run, though.
[cceahke@login01 ~]$ pwgui
==================================================
This is PWgui version: 6.1
--------------------------------------------------
PWgui: using the system default "tclsh" interpreter
PWGUI : /shared/ucl/apps/quantum-espresso/6.1/intel-2017/PWgui-6.1
GUIB engine : /shared/ucl/apps/quantum-espresso/6.1/intel-2017/PWgui-6.1/lib/Guib-0.6
can't find package Itk
while executing
"package require Itk "
(file "/shared/ucl/apps/quantum-espresso/6.1/intel-2017/PWgui-6.1/lib/Guib-0.6/init.tcl" line 11)
invoked from within
"source /shared/ucl/apps/quantum-espresso/6.1/intel-2017/PWgui-6.1/lib/Guib-0.6/init.tcl"
("package ifneeded Guib 0.6" script)
invoked from within
"package require Guib 0.5"
(file "/shared/ucl/apps/quantum-espresso/6.1/intel-2017/PWgui-6.1/init.tcl" line 5)
invoked from within
"source [file join $env(PWGUI) init.tcl]"
(file "/shared/ucl/apps/quantum-espresso/6.1/intel-2017/PWgui-6.1/pwgui.tcl" line 62)
Have told the requestor that the rest of it is available, but pwgui is not working yet.
Next week look at itk etc.
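When we do, a quick way to check whether a given tclsh can see the packages PWgui needs is something like this (just a sketch):

# probe the Tcl interpreter for the packages PWgui requires
tclsh <<'EOF'
foreach pkg {Itcl Itk Iwidgets} {
    if {[catch {package require $pkg} ver]} {
        puts "$pkg: not found"
    } else {
        puts "$pkg: version $ver"
    }
}
EOF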
In PWgui 6.1, '&bands' shows no options to choose "run calculation", "run and configure calculation", etc. in the Run tab. I am not sure what to do to correct it. Can anyone help me with this?
@Sneha112Banerjee I think you might be asking this in the wrong place. Are you a user of University College London's clusters? We didn't install the rest of the requirements for PWgui so have no experience using it.
I get that. Thank you for responding.
Going to close for now as QE is installed. If someone needs PWGUI we'll open a new issue.
IN:02403508
We have 5.2 and the newest is 6.1.
It has been updated so make all creates all the optional external packages as well: http://www.quantum-espresso.org/wp-content/uploads/Doc/user_guide/node10.html