libAtoms / QUIP

libAtoms/QUIP molecular dynamics framework: https://libatoms.github.io

Issue with number of sparse points - tungsten data set and potential from data repo #132

Closed joshgabriel closed 5 years ago

joshgabriel commented 5 years ago

I am trying to understand the effect of parameters in the teach_sparse command by using the tungsten data set as a starting example with the following aims:

  1. Reproduce the potential on the data repository with the training data from the data repository (2000 unit cells of bulk tungsten) and with equivalent command line options to teach_sparse, because of the possible version change between 2015 and present.
  2. Run the potential from the data repository with the quip command to compare results.

For both 1. and 2. I get the "LA_Matrix_Factorise" error returning a positive number, 14, from linearalgebra.f95 when using 2000 sparse points (which I understand was the choice when fitting the potential in the data repository):

The commands below were tested with code compiled against Intel MKL 2017 on an HPC platform, and with the latest docker image on an Ubuntu 18.04 desktop.

Hopefully there is just something wrong in my syntax translation, or something simpler. Any insights on the attempts below are greatly appreciated.

The teach_sparse command recorded in the command_line tag of gp.xml under the tungsten potentials:

<command_line><![CDATA[ at_file=teach.xyz descriptor_str={soap l_max=14 n_max=14 atom_sigma=0.5 zeta=4 cutoff=5.0 cutoff_transition_width=1.0 delta=1.0 f0=0.0 covariance_type=dot_product sparse_method=kmeans n_sparseX=2000} do_sparse default_sigma={0.001 0.1 0.1} config_type_sigma={slice_sample:0.0001:0.01:0.01} e0=-9.19483512529700 sparse_jitter=1.0e-6 energy_parameter_name=energy force_parameter_name=force virial_parameter_name=virial config_type_parameter_name=config_type gp_file=gp.xml sparseX_separate_file=T rnd_seed=666]]></command_line>
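For reference, the command_line tag can be pulled out of a GAP xml file programmatically. A minimal sketch in Python; the toy_xml below is a hypothetical stand-in for the real gp.xml, whose element nesting may differ:

```python
# Minimal sketch: recover the original fitting command from a GAP xml file.
# The layout is assumed to match the <command_line><![CDATA[...]]></command_line>
# element quoted above; the real gp.xml may nest this tag deeper.
import xml.etree.ElementTree as ET

def extract_command_line(xml_text):
    """Return the stripped text of the first <command_line> element found."""
    root = ET.fromstring(xml_text)
    # .// searches the whole tree, so the element may sit at any depth
    elem = root if root.tag == "command_line" else root.find(".//command_line")
    return elem.text.strip()

# A toy stand-in for gp.xml, reduced to the part we care about:
toy_xml = """<GAP_params>
  <command_line><![CDATA[ at_file=teach.xyz gp_file=gp.xml rnd_seed=666 ]]></command_line>
</GAP_params>"""

print(extract_command_line(toy_xml))
```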

My translation of the above command, tested with GAP Version = 1552559779 and several libAtoms/QUIP commits (April 7 2019, May 11 2017, and the latest version as of posting this issue):

teach_sparse_mpi e0={W:-9.19483512529700} sparse_jitter=1e-06 energy_parameter_name=energy force_parameter_name=force virial_parameter_name=virial config_type_parameter_name=config_type config_type_sigma={slice_sample:0.0001:0.01:0.01:0.0} gp_file=ReGAP_W_sparse2000_kmeans.xml at_file=GAP_W_Bulk.xyz default_sigma={0.001 0.1 0.1 0} gap={soap cutoff=5.0 cutoff_transition_width=1.0 atom_sigma=0.5 l_max=14 n_max=14 covariance_type=dot_product delta=1.0 zeta=4 n_sparse=2000 sparse_method=kmeans}

The error I get is: SYSTEM ABORT: proc=0 Traceback (most recent call last) File "linearalgebra.f95", line 2309 kind unspecified LA_Matrix_Factorise: cannot factorise, error: 14

If I use the quip command on the xml potential file from the repository 'gp.xml':

quip calc_args=local_gap_variance L=T E=T F=T atoms_filename=GAP_W_Bulk.xyz param_filename=gp.xml | grep AT | sed 's/AT//' >> DR_GAP_W.xyz

I get the same error with a different return code from dpotrf() coming from linearalgebra.f95: SYSTEM ABORT: Traceback (most recent call last) File "/opt/quip/src/libAtoms/linearalgebra.f95", line 2309 kind unspecified LA_Matrix_Factorise: cannot factorise, error: 33

However, if I change the sparse points to 15, I get a successful fit, but this does not seem to be the right thing to do (only use 15 sparse points?). Since this error comes from dpotrf() complaining that a positive-definite matrix was not found when it attempts the Cholesky decomposition for the inverse, I also tried bumping sparse_jitter up to 1e-04 and down to 1e-08, but I get the same error.
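As an aside, the role of sparse_jitter here can be illustrated with a toy Cholesky factorisation. This is a sketch, not QUIP code, and of course in the case above changing the jitter did not resolve the error:

```python
# Illustration (not QUIP code) of why sparse_jitter exists: dpotrf/Cholesky
# fails on a covariance matrix that is numerically not positive definite,
# and adding a small jitter to the diagonal can restore factorisability.
import math

def cholesky(a):
    """Plain Cholesky; raises ValueError like dpotrf's info>0 on failure."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0.0:
                    raise ValueError("not positive definite at row %d" % (i + 1))
                L[i][i] = math.sqrt(d)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

# Two (near-)identical sparse points make the covariance singular:
K = [[1.0, 1.0], [1.0, 1.0]]
try:
    cholesky(K)
except ValueError as e:
    print("plain:", e)

jitter = 1e-6
K_j = [[K[i][j] + (jitter if i == j else 0.0) for j in range(2)] for i in range(2)]
L = cholesky(K_j)  # succeeds once the diagonal is jittered
print("jittered factorisation succeeded")
```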

Attached: gp.xml file renamed as gp.xml.txt (does not run with current quip) and GAP_W_Bulk.xyz as GAP_W_Bulk.xyz.txt since .xml and .xyz don't seem to be supported file formats to upload here.

Thanks very much for your time in taking a look at this.

Joshua

GAP_W_Bulk.xyz.txt gp.xml.txt

gabor1 commented 5 years ago

Let's break this down into parts. It's best to focus on the quip command first, using the repository XML file. Can you please download and run the docker, try the command in there, and post here what happens?

-- Gábor


joshgabriel commented 5 years ago

Thanks, I just downloaded the latest docker image with:

$ docker pull libatomsquip/quip

ran it with:

$ docker run -ti -v /home/joshua/Work/DockerHome/:/home libatomsquip/quip:latest /bin/bash

In /home I had the gp.xml and training data xyz file attached.

I ran the quip command:

quip calc_args=local_gap_variance L=T E=T F=T atoms_filename=GAP_W_Bulk.xyz param_filename=gp.xml | grep AT | sed 's/AT//' >> DR_GAP_W.xyz

and got the error:

SYSTEM ABORT: Traceback (most recent call last) File "/opt/quip/src/libAtoms/linearalgebra.f95", line 2309 kind unspecified LA_Matrix_Factorise: cannot factorise, error: 33

gabor1 commented 5 years ago

I just did the same, and it ran fine (on the first 10 configurations). Can you (i) place your xml and xyz files where I can look at them, and (ii) isolate which configuration your command is choking on?

Gabor

docker:tmp$ quip calc_args=local_gap_variance L=T E=T F=T atoms_filename=/tmp/tmp10.xyz param_filename=/opt/share/potentials/GAP/Tungsten/gp.xml
libAtoms::Hello World: 02/07/2019 15:01:59
libAtoms::Hello World: git version https://github.com/libAtoms/QUIP.git,882dddd
libAtoms::Hello World: QUIP_ARCH linux_x86_64_gfortran
libAtoms::Hello World: compiled on Jun 4 2019 at 09:44:25
libAtoms::Hello World: Random Seed = 54119002
libAtoms::Hello World: global verbosity = 0

Calls to system_timer will do nothing by default

Using calc args: local_gap_variance Using pre-relax calc args: Using param_filename: /opt/share/potentials/GAP/Tungsten/gp.xml Using init args: WARNING: Potential_initialise using default init_args "Potential xml_label=GAP_2013_6_24_60_12_58_8_327" WARNING: gpCoordinates_startElement_handler: covariance type is dot product, but no zeta attribute is present. This may mean an XML generated by an older version. If found, the single value from the theta element will be used, to ensure backwards compatibility WARNING: gpCoordinates_endElement_handler: dot product covariance is used, but found a theta element in the XML. This may be a sign of an XML generated by an older version. The first and only element of theta will be used as zeta. Potential containing potential IP : 6 IPModel_GAP : Gaussian Approximation Potential IPModel_GAP : label = GAP_2013_6_24_60_12_58_8_327 IPModel_GAP : cutoff = 5.0000000000000000 IPModel_GAP : E_scale = 1.0000000000000000 IPModel_GAP : command_line = at_file=teach.xyz descriptor_str={soap l_max=14 n_max=14 atom_sigma=0.5 zeta=4 cutoff=5.0 cutoff_transition_width=1.0 delta=1.0 f0=0.0 covariance_type=dot_product sparse_method=kmeans n_sparseX=10000} do_sparse default_sigma={0.001 0.1 0.1} config_type_sigma={slice_sample:0.0001:0.01:0.01} e0=-9.19483512529700 sparse_jitter=1.0e-6 energy_parameter_name=energy force_parameter_name=force virial_parameter_name=virial config_type_parameter_name=config_type gp_file=gp.xml sparseX_separate_file=T rnd_seed=666 Energy=-11.194877858607239 Cell Volume: 16.086308963923877 A^3 AT 1 AT config_type=slice_sample energy=-11.194877858607239 virial="-0.00570398 -0.00000000 -0.00000000 -0.00000000 -0.00570398 -0.00000000 -0.00000000 -0.00000000 -0.00570398" cutoff=5.00000000 nneightol=1.20000000 pbc="T T T" Lattice="3.18050050 0.00000000 0.00000000 0.00000000 3.18050050 0.00000000 1.59025024 1.59025024 1.59025024" 
Properties=species:S:1:pos:R:3:force:R:3:Z:I:1:map_shift:I:3:n_neighb:I:1:local_energy:R:1:local_gap_variance:R:1:gap_variance_gradient:R:3 AT W 0.00000000 0.00000000 0.00000000 -0.00000000 -0.00000000 0.00000000 74 0 0 0 50 -11.19487786 0.00000100 -0.00000000 0.00000000 0.00000000 Energy=-11.126959110115362 Cell Volume: 16.860330551521390 A^3 AT 1 AT config_type=slice_sample energy=-11.126959110115362 virial="-2.17303336 -0.00000000 0.58154873 -0.00000000 -0.84500699 -0.00000000 0.58154873 -0.00000000 -0.87624065" cutoff=5.00000000 nneightol=1.20000000 pbc="T T T" Lattice="3.33353598 0.00000000 0.00000000 0.00000000 3.18050050 0.00000000 1.59025024 1.59025024 1.59025024" Properties=species:S:1:pos:R:3:force:R:3:Z:I:1:map_shift:I:3:n_neighb:I:1:local_energy:R:1:local_gap_variance:R:1:gap_variance_gradient:R:3 AT W 0.00000000 0.00000000 0.00000000 -0.00000000 -0.00000000 0.00000000 74 0 0 0 46 -11.12695911 0.00000105 -0.00000000 -0.00000000 0.00000000 Energy=-11.133296917673189 Cell Volume: 16.860330551521390 A^3 AT 1 AT config_type=slice_sample energy=-11.133296917673189 virial="-2.19171628 0.20100726 0.37614639 0.20100726 -0.87306632 0.02824513 0.37614639 0.02824513 -0.88275304" cutoff=5.00000000 nneightol=1.20000000 pbc="T T T" Lattice="3.33353598 0.00000000 0.00000000 -0.05319854 3.18050050 0.00000000 1.59025024 1.59025024 1.59025024" Properties=species:S:1:pos:R:3:force:R:3:Z:I:1:map_shift:I:3:n_neighb:I:1:local_energy:R:1:local_gap_variance:R:1:gap_variance_gradient:R:3 AT W 0.00000000 0.00000000 0.00000000 -0.00000000 0.00000000 -0.00000000 74 0 0 0 46 -11.13329692 0.00000106 -0.00000000 -0.00000000 0.00000000 Energy=-11.139638438873984 Cell Volume: 16.576181549112764 A^3 AT 1 AT config_type=slice_sample energy=-11.139638438873984 virial="-1.85927057 0.17991947 0.38655942 0.17991947 -0.01941859 -0.17975547 0.38655942 -0.17975547 -0.57504660" cutoff=5.00000000 nneightol=1.20000000 pbc="T T T" Lattice="3.33353598 0.00000000 0.00000000 -0.05319854 3.12689917 
0.00000000 1.59025024 1.59025024 1.59025024" Properties=species:S:1:pos:R:3:force:R:3:Z:I:1:map_shift:I:3:n_neighb:I:1:local_energy:R:1:local_gap_variance:R:1:gap_variance_gradient:R:3 AT W 0.00000000 0.00000000 0.00000000 -0.00000000 -0.00000000 0.00000000 74 0 0 0 48 -11.13963844 0.00000103 0.00000000 -0.00000000 -0.00000000 Energy=-11.141972187805262 Cell Volume: 16.576181549112764 A^3 AT 1 AT config_type=slice_sample energy=-11.141972187805262 virial="-1.86904080 0.21488208 -0.29845169 0.21488208 -0.02879041 -0.22668403 -0.29845169 -0.22668403 -0.57987520" cutoff=5.00000000 nneightol=1.20000000 pbc="T T T" Lattice="3.33353598 0.00000000 0.00000000 -0.05319854 3.12689917 0.00000000 1.67542193 1.59025024 1.59025024" Properties=species:S:1:pos:R:3:force:R:3:Z:I:1:map_shift:I:3:n_neighb:I:1:local_energy:R:1:local_gap_variance:R:1:gap_variance_gradient:R:3 AT W 0.00000000 0.00000000 0.00000000 -0.00000000 0.00000000 0.00000000 74 0 0 0 50 -11.14197219 0.00000103 -0.00000000 0.00000000 -0.00000000 Energy=-11.138816659041364 Cell Volume: 16.576181549112764 A^3 AT 1 AT config_type=slice_sample energy=-11.138816659041364 virial="-1.85427404 0.22564813 -0.30972461 0.22564813 -0.02793345 -0.35836731 -0.30972461 -0.35836731 -0.57096262" cutoff=5.00000000 nneightol=1.20000000 pbc="T T T" Lattice="3.33353598 0.00000000 0.00000000 -0.05319854 3.12689917 0.00000000 1.67542193 1.60740426 1.59025024" Properties=species:S:1:pos:R:3:force:R:3:Z:I:1:map_shift:I:3:n_neighb:I:1:local_energy:R:1:local_gap_variance:R:1:gap_variance_gradient:R:3 AT W 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 74 0 0 0 50 -11.13881666 0.00000103 0.00000000 0.00000000 0.00000000 Energy=-11.123690707695342 Cell Volume: 16.841145054908743 A^3 AT 1 AT config_type=slice_sample energy=-11.123690707695342 virial="-2.15427510 0.21010016 -0.30281191 0.21010016 -0.31344827 -0.34634523 -0.30281191 -0.34634523 -1.33020292" cutoff=5.00000000 nneightol=1.20000000 pbc="T T T" Lattice="3.33353598 
0.00000000 0.00000000 -0.05319854 3.12689917 0.00000000 1.67542193 1.60740426 1.61566974" Properties=species:S:1:pos:R:3:force:R:3:Z:I:1:map_shift:I:3:n_neighb:I:1:local_energy:R:1:local_gap_variance:R:1:gap_variance_gradient:R:3 AT W 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 74 0 0 0 48 -11.12369071 0.00000106 -0.00000000 -0.00000000 0.00000000 Energy=-11.150444386868397 Cell Volume: 16.278741661094653 A^3 AT 1 AT config_type=slice_sample energy=-11.150444386868397 virial="-0.59389247 0.28138447 -0.85430547 0.28138447 0.40692585 -0.44612230 -0.85430547 -0.44612230 -0.61814460" cutoff=5.00000000 nneightol=1.20000000 pbc="T T T" Lattice="3.22221386 0.00000000 0.00000000 -0.05319854 3.12689917 0.00000000 1.67542193 1.60740426 1.61566974" Properties=species:S:1:pos:R:3:force:R:3:Z:I:1:map_shift:I:3:n_neighb:I:1:local_energy:R:1:local_gap_variance:R:1:gap_variance_gradient:R:3 AT W 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 -0.00000000 74 0 0 0 52 -11.15044439 0.00000102 0.00000000 0.00000000 0.00000000 Energy=-11.047669103007816 Cell Volume: 16.278741661094653 A^3 AT 1 AT config_type=slice_sample energy=-11.047669103007816 virial="-0.49358409 1.05574458 -1.72026404 1.05574458 0.66083127 -0.84518180 -1.72026404 -0.84518180 -0.22428210" cutoff=5.00000000 nneightol=1.20000000 pbc="T T T" Lattice="3.22221386 0.00000000 0.00000000 -0.21881412 3.12689917 0.00000000 1.67542193 1.60740426 1.61566974" Properties=species:S:1:pos:R:3:force:R:3:Z:I:1:map_shift:I:3:n_neighb:I:1:local_energy:R:1:local_gap_variance:R:1:gap_variance_gradient:R:3 AT W 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 74 0 0 0 46 -11.04766910 0.00000123 -0.00000000 -0.00000000 0.00000000 Energy=-11.062150287588505 Cell Volume: 16.514163092898766 A^3 AT 1 AT config_type=slice_sample energy=-11.062150287588505 virial="-0.79477109 0.98219337 -1.58688327 0.98219337 -0.06359375 -0.59673357 -1.58688327 -0.59673357 -0.57670100" cutoff=5.00000000 
nneightol=1.20000000 pbc="T T T" Lattice="3.22221386 0.00000000 0.00000000 -0.21881412 3.17212006 0.00000000 1.67542193 1.60740426 1.61566974" Properties=species:S:1:pos:R:3:force:R:3:Z:I:1:map_shift:I:3:n_neighb:I:1:local_energy:R:1:local_gap_variance:R:1:gap_variance_gradient:R:3 AT W 0.00000000 0.00000000 0.00000000 -0.00000000 0.00000000 0.00000000 74 0 0 0 46 -11.06215029 0.00000114 0.00000000 -0.00000000 -0.00000000

libAtoms::Finalise: 02/07/2019 15:04:44
libAtoms::Finalise: Bye-Bye!
docker:tmp$

-- Gábor

Gábor Csányi Professor of Molecular Modelling Engineering Laboratory Pembroke College University of Cambridge

Pembroke College supports CARA. A Lifeline to Academics at Risk. http://www.cara.ngo/


joshgabriel commented 5 years ago

(i) original xyz file and xml file GAP_W_Bulk.xyz.txt gp.xml.txt

Working on (ii)

Thanks for your time in working on this with me. It didn't resolve for me with the first 10 configurations I selected. My next step is to try the xyz deck that is the output in the previous comment; alternatively, if you could share the tmp10.xyz file, that would help too.

I used the potential in the /opt/share/ folder, as in the previous comment, and tested it on:

  1. first 10 configurations I could parse out. tmpFirst10_joshua.xyz.txt
  2. the first configuration:

$ cat Case0.xyz
1
Lattice="3.180500495732849 0.0 0.0 0.0 3.180500495732849 0.0 1.590250242866424 1.590250242866424 1.590250242866424" Properties=species:S:1:pos:R:3:force:R:3 virial="-0.005703978934983333 -0.0 -0.0 -0.0 -0.005703978934983293 -0.0 -0.0 -0.0 -0.005703978934983333" config_type=slice_sample energy=-11.194835125297 pbc="T T T"
W 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000

Unfortunately, I encounter the same error, but this time preceded by some of the initial output shown in the previous comment about the version difference in the xml file (previously I got the LA_Matrix_Factorise error straight away).

Here is the output when running quip on the first configuration:

$ docker run -ti -v /home/joshua/Work/DockerHome/:/home libatomsquip/quip:latest /bin/bash

docker:/$ cd /home/LibGAP_W/
docker:LibGAP_W$ quip calc_args=local_gap_variance L=T E=T F=T atoms_filename=Case0.xyz param_filename=/opt/share/potentials/GAP/Tungsten/gp.xml
libAtoms::Hello World: 02/07/2019 15:51:49
libAtoms::Hello World: git version https://github.com/libAtoms/QUIP.git,882dddd
libAtoms::Hello World: QUIP_ARCH linux_x86_64_gfortran
libAtoms::Hello World: compiled on Jun 4 2019 at 09:44:25
libAtoms::Hello World: Random Seed = 57109588
libAtoms::Hello World: global verbosity = 0

Calls to system_timer will do nothing by default

Using calc args: local_gap_variance Using pre-relax calc args: Using param_filename: /opt/share/potentials/GAP/Tungsten/gp.xml Using init args: WARNING: Potential_initialise using default init_args "Potential xml_label=GAP_2013_6_24_60_12_58_8_327" WARNING: gpCoordinates_startElement_handler: covariance type is dot product, but no zeta attribute is present. This may mean an XML generated by an older version. If found, the single value from the theta element will be used, to ensure backwards compatibility WARNING: gpCoordinates_endElement_handler: dot product covariance is used, but found a theta element in the XML. This may be a sign of an XML generated by an older version. The first and only element of theta will be used as zeta. SYSTEM ABORT: Traceback (most recent call last) File "/opt/quip/src/libAtoms/linearalgebra.f95", line 2309 kind unspecified LA_Matrix_Factorise: cannot factorise, error: 33

joshgabriel commented 5 years ago

Additionally if this helps:

docker:LibGAP_W$ cat /opt/quip/src/GAP/GAP_VERSION
1559311901

And checking docker image version:

$ sudo docker pull libatomsquip/quip:latest

latest: Pulling from libatomsquip/quip
Digest: sha256:b597a346f67b7abc00fb821d5b550bfe0271fb4560079dcf06005e488b29cf07
Status: Image is up to date for libatomsquip/quip:latest

gabor1 commented 5 years ago

That's nuts. It is having trouble initializing the potential. But it's the same docker image!!

I wonder if it's possible to share an already running docker container.

-- Gábor


joshgabriel commented 5 years ago

There may be a way via a Digital Ocean droplet. It would involve setting up a cloud server on Digital Ocean; I understand this is used for web-app DevOps deployed from containers, and it would mean installing GAP and libAtoms from scratch just as on any Linux workstation.

joshgabriel commented 5 years ago

I just checked out some other docker images from the libatomsquip repo on Docker Hub. I found libatomsquip/quip-gap-py3 to work for the first configuration using quip; I get perfect output, at least for quip, now! I used the same gp.xml that was referenced in the previous docker, libatomsquip/quip:latest. libatomsquip/quip-gap-py3 did not have the potential in /opt/share/:

docker:LibGAP_W$ quip calc_args=local_gap_variance L=T E=T F=T atoms_filename=Case0.xyz param_filename=./Tungsten/gp.xml
libAtoms::Hello World: 02/07/2019 19:00:07
libAtoms::Hello World: git version https://github.com/libAtoms/QUIP.git,05aa55b-dirty
libAtoms::Hello World: QUIP_ARCH linux_x86_64_gfortran_openmp
libAtoms::Hello World: compiled on Jun 9 2019 at 16:04:48
libAtoms::Hello World: OpenMP parallelisation with 36 threads
WARNING: libAtoms::Hello World: environment variable OMP_STACKSIZE not set explicitly. The default value - system and compiler dependent - may be too small for some applications.
libAtoms::Hello World: Random Seed = 68407452
libAtoms::Hello World: global verbosity = 0

Calls to system_timer will do nothing by default

Using calc args: local_gap_variance Using pre-relax calc args: Using param_filename: ./Tungsten/gp.xml Using init args: WARNING: Potential_initialise using default init_args "Potential xml_label=GAP_2013_6_24_60_12_58_8_327" WARNING: gpCoordinates_startElement_handler: covariance type is dot product, but no zeta attribute is present. This may mean an XML generated by an older version. If found, the single value from the theta element will be used, to ensure backwards compatibility WARNING: gpCoordinates_endElement_handler: dot product covariance is used, but found a theta element in the XML. This may be a sign of an XML generated by an older version. The first and only element of theta will be used as zeta. Potential containing potential IP : 6 IPModel_GAP : Gaussian Approximation Potential IPModel_GAP : label = GAP_2013_6_24_60_12_58_8_327 IPModel_GAP : cutoff = 5.0000000000000000 IPModel_GAP : E_scale = 1.0000000000000000 IPModel_GAP : command_line = at_file=teach.xyz descriptor_str={soap l_max=14 n_max=14 atom_sigma=0.5 zeta=4 cutoff=5.0 cutoff_transition_width=1.0 delta=1.0 f0=0.0 covariance_type=dot_product sparse_method=kmeans n_sparseX=10000} do_sparse default_sigma={0.001 0.1 0.1} config_type_sigma={slice_sample:0.0001:0.01:0.01} e0=-9.19483512529700 sparse_jitter=1.0e-6 energy_parameter_name=energy force_parameter_name=force virial_parameter_name=virial config_type_parameter_name=config_type gp_file=gp.xml sparseX_separate_file=T rnd_seed=666 Energy=-11.194877858536355 Cell Volume: 16.086308963923877 A^3 AT 1 AT virial="-0.00570398 -0.00000000 -0.00000000 -0.00000000 -0.00570398 -0.00000000 -0.00000000 -0.00000000 -0.00570398" config_type=slice_sample energy=-11.194877858536355 cutoff=5.00000000 nneightol=1.20000000 pbc="T T T" Lattice="3.18050050 0.00000000 0.00000000 0.00000000 3.18050050 0.00000000 1.59025024 1.59025024 1.59025024" 
Properties=species:S:1:pos:R:3:force:R:3:Z:I:1:map_shift:I:3:n_neighb:I:1:local_energy:R:1:local_gap_variance:R:1:gap_variance_gradient:R:3 AT W 0.00000000 0.00000000 0.00000000 -0.00000000 0.00000000 0.00000000 74 0 0 0 50 -11.19487786 0.00000100 -0.00000000 0.00000000 -0.00000000

libAtoms::Finalise: 02/07/2019 19:01:57
libAtoms::Finalise: Bye-Bye!

I found that the teach_sparse command is replaced with gap_fit. However, I do not find gap_fit or teach_sparse in /opt/quip/bin. Has the command changed to something else?

Thanks for your time, Josh

joshgabriel commented 5 years ago

I also confirm that quip in the docker image libatomsquip/quip-gap-py3 runs successfully for the entire training set.

I also found that this docker image corresponds to GAP Version 1560003798. It only seems to be missing the corresponding gap_fit command. Or is there documentation I am missing on how to call gap_fit in place of teach_sparse, which was used in previous versions?

Thanks Josh

joshgabriel commented 5 years ago

I dug around in the new docker and found the gap_fit program in /opt/quip/build/linux_x86_64_gfortran/gap_fit. It just did not get copied to /opt/quip/bin when performing make install in QUIP_ROOT. I was able to re-fit the tungsten potential successfully with the 2000 sparse points as reported. Yay!

I think you can close the issue, unless you want to wait on one final test, which I am doing now: comparing the output of quip when running the original gp.xml and the newly fit potential on the same training data of 2000 tungsten unit cells.

Thanks Josh

joshgabriel commented 5 years ago

Running each potential, I obtain the following statistical differences, which I think are acceptable (max difference is less than 1 meV/atom):

Potential Energy (as parsed out with Python ASE):
RMSE = 7.7e-02 meV/atom
Min Absolute Difference = 6e-07 meV/atom
Max Absolute Difference = 0.7 meV/atom
Mean Absolute Difference = 5e-02 meV/atom, with Std. Dev. of 6e-02 meV/atom

The forces were 0 in the training set, and are all 0 in the set produced by quip for both my re-fit potential and the potential from the repository.
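For reference, the statistics above can be reproduced with a short script. The energies below are hypothetical placeholders; in practice they were parsed from the quip output with ASE, which this standalone sketch skips:

```python
# Minimal sketch of the comparison statistics quoted above, computed on
# hypothetical per-atom energies (in meV/atom); the values are placeholders,
# not data from the thread.
import math

def energy_stats(e_a, e_b):
    """RMSE, min/max/mean absolute difference and its std. dev. (input units)."""
    diffs = [abs(a - b) for a, b in zip(e_a, e_b)]
    n = len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    mean = sum(diffs) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in diffs) / n)
    return {"rmse": rmse, "min": min(diffs), "max": max(diffs),
            "mean": mean, "std": std}

# Hypothetical energies from two fits of the same configurations:
old_fit = [-11194.88, -11126.96, -11133.30]
new_fit = [-11194.83, -11126.99, -11133.30]

stats = energy_stats(old_fit, new_fit)
print({k: round(v, 4) for k, v in stats.items()})
```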

My conclusion is that between GAP version 1559311901 and version 1560003798, the latter reproduces the tungsten potential fit and could be the best to work with going forward.

If this sounds right, I think we can close the issue.

Thanks for your time Josh

gabor1 commented 5 years ago

Thanks for all this digging.

I’m a bit concerned that the agreement is only just less than a meV. Is this between the old and new potential, or just running the old potential with old and new code? If the former, then it’s fine.

You caught us during a docker update process. The new docker images you found are indeed the ones to go forward with; we are planning to make them the default “quip” docker on Friday (the py2, with the py3 following during the summer once the descriptor interfaces have been updated to py3).

I’m afraid we will now never know how it is possible to get different answers for running the same docker image… I can’t see us spending time on it.

-- Gábor


gabor1 commented 5 years ago

I looked at the quip-py2 docker, and the “gap_fit” program is in /opt/quip/bin.

docker:quip$ ls -l /opt/quip/bin/gap_fit
-rwxr-xr-x 1 root root 9043248 Jun 14 21:23 /opt/quip/bin/gap_fit
docker:quip$

-- Gábor


joshgabriel commented 5 years ago

Thanks. I will give the quip-py2 docker a try and can post the results on the differences if that's helpful as well. Yes, it is the difference between potential A (fit with the new code, i.e. the quip-gap-py3 docker, with what I understand are accurate translations of the parameters) and potential B (the potential available on the data repo at libatoms.org). I translated the parameters from potential B's command_line tag in the xml file.

How small a difference should we expect? None would be great, I am sure, but perhaps the new version is more accurate. Perhaps the best benchmark would be against the training data rather than between potentials A and B?

For me the issue seems solved, though, in case you would like to close it. I can update the benchmarks I mentioned even after the issue is closed.

Thanks a lot for your time Josh

joshgabriel commented 5 years ago

Training-data errors (against the training data file) for the potentials fit with:

  1. docker quip-py2 (all in meV/atom): RMSE 0.11; min absolute error 0.000; max absolute error 0.953; mean absolute error ± std. dev. 0.075 ± 0.081

  2. docker quip-py3 (all in meV/atom): RMSE 0.11; min absolute error 0.000; max absolute error 0.953; mean absolute error ± std. dev. 0.075 ± 0.081

  3. data repo potential gp.xml (all in meV/atom): RMSE 0.071; min absolute error 0.000; max absolute error 0.757; mean absolute error ± std. dev. 0.051 ± 0.05

I think it is good to see identical errors across platforms for the different docker images, and, comparing the above, agreement within roughly 0.1 meV/atom with the potential on the data repository.
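For reference, error metrics of this kind can be recomputed with a one-liner. The sketch below uses made-up per-atom energies (first column predicted, second column reference, both in eV/atom), not the actual tungsten training data:

```shell
# Hypothetical predicted vs. reference energies (eV/atom); illustrative only.
cat > energies.dat <<'EOF'
-9.1946 -9.1948
-9.1023 -9.1020
-8.9870 -8.9877
-9.2104 -9.2105
EOF

# Convert errors to meV/atom, then accumulate RMSE, min/max/mean absolute error.
stats=$(awk '{e=($1-$2)*1000; a=e<0?-e:e; s2+=e*e; s+=a;
              if(a>mx)mx=a; if(NR==1||a<mn)mn=a}
             END{printf "RMSE %.3f  min %.3f  max %.3f  MAE %.3f meV/atom",
                 sqrt(s2/NR), mn, mx, s/NR}' energies.dat)
echo "$stats"
```

The standard deviation of the absolute error (also quoted above) is omitted here for brevity.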

P.S. This might be unrelated, or could be a separate issue. I tried installing the above GAP / libAtoms versions from the docker as a standalone build on an HPC with Intel 2017 compilers, to check that the results are the same. I get stuck with missing netcdf libraries at runtime, which were not required in previous successful installations I made with the Intel compiler on the HPC, even if I answer "No" to compiling with NetCDF4 support. Is this expected, i.e. is NetCDF4 a mandatory requirement now?

Thanks Josh

gabor1 commented 5 years ago

Regarding the netcdf issue, can you please find out which object file needs netcdf? Use "nm" and grep for netcdf. It might be that it's not quip that needs it but some other external library that you are linking in. If it's us, we'll fix it.

-- Gábor


joshgabriel commented 5 years ago

When doing nm -A <binary> | grep netcdf on the installed quip and gap_fit binaries created in $QUIP_ROOT/bin

$ nm -A ./bin/quip | grep netcdf
./bin/quip:000000000058f560 T convert_from_netcdf_type
./bin/quip:000000000058fea0 T convert_to_netcdf_type
./bin/quip:0000000000599ab0 T query_netcdf
./bin/quip:0000000000590460 T read_netcdf
./bin/quip:0000000000592a40 T write_netcdf

$ nm -A ./bin/gap_fit | grep netcdf
./bin/gap_fit:0000000000587e80 T convert_from_netcdf_type
./bin/gap_fit:00000000005887c0 T convert_to_netcdf_type
./bin/gap_fit:00000000005923d0 T query_netcdf
./bin/gap_fit:0000000000588d80 T read_netcdf
./bin/gap_fit:000000000058b360 T write_netcdf

Get similar output for some third-party apps:

$ nm -A ./bin/DFTB_to_xml | grep netcdf
./bin/DFTB_to_xml:000000000055d1b0 T convert_from_netcdf_type
./bin/DFTB_to_xml:000000000055daf0 T convert_to_netcdf_type
./bin/DFTB_to_xml:0000000000567700 T query_netcdf
./bin/DFTB_to_xml:000000000055e0b0 T read_netcdf
./bin/DFTB_to_xml:0000000000560690 T write_netcdf

$ nm -A ./bin/cp2k_driver | grep netcdf
./bin/cp2k_driver:000000000012f000 T convert_from_netcdf_type
./bin/cp2k_driver:000000000012f940 T convert_to_netcdf_type
./bin/cp2k_driver:0000000000139550 T query_netcdf
./bin/cp2k_driver:000000000012ff00 T read_netcdf
./bin/cp2k_driver:00000000001324e0 T write_netcdf

But when I do nm -D (dynamic symbols only, I believe) | grep netcdf, grep does not find any netcdf symbols.

joshgabriel commented 5 years ago

In case it may be something to do with Intel Compiler / netcdf versions, here are the modules (versions) loaded at compile as well as run time:

Currently Loaded Modulefiles:
  1) intel/2017.1.132          2) intel_mpi/2017
  3) PrgEnv-intel/2017.1.132   4) hdf5/1.10.2-intel2017
  5) netcdf/4.6.1-intel2017

gabor1 commented 5 years ago

That doesn't tell us which object file has the netcdf in it. So don't grep the final binaries, but grep the object files in the .../build/.. directory where QUIP compiles.

-- Gábor


gabor1 commented 5 years ago

That doesn't help, I don't have the intel compilers.

-- Gábor


joshgabriel commented 5 years ago

Hope this is in the right direction:

$ nm -A ./build/linux_x86_64_ifort_icc_mpi/*.o | grep netcdf
./build/linux_x86_64_ifort_icc_mpi/CInOutput.o: U query_netcdf
./build/linux_x86_64_ifort_icc_mpi/CInOutput.o: U read_netcdf
./build/linux_x86_64_ifort_icc_mpi/CInOutput.o: U write_netcdf
./build/linux_x86_64_ifort_icc_mpi/netcdf.o:0000000000002be0 T convert_from_netcdf_type
./build/linux_x86_64_ifort_icc_mpi/netcdf.o:000000000000a7c0 T convert_to_netcdf_type
./build/linux_x86_64_ifort_icc_mpi/netcdf.o:000000000000add0 T query_netcdf
./build/linux_x86_64_ifort_icc_mpi/netcdf.o:0000000000000000 T read_netcdf
./build/linux_x86_64_ifort_icc_mpi/netcdf.o:0000000000003a20 T replace_fill_values
./build/linux_x86_64_ifort_icc_mpi/netcdf.o:0000000000004120 T write_netcdf
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_close
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_create
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_def_dim
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_def_var
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_def_var_deflate
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_enddef
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_get_att_int
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_get_var1_double
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_get_var1_int
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_get_vara_double
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_get_vara_int
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_get_vara_text
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_inq
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_inq_att
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_inq_dimid
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_inq_dimlen
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_inq_nvars
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_inq_var
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_inq_varid
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_open
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_put_att_int
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_put_att_text
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_put_vara_double
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_put_vara_int
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_put_vara_text
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_put_var_text
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_redef
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_set_default_format
./build/linux_x86_64_ifort_icc_mpi/netcdf.o: U nc_strerror
(netcdf.o also shows undefined references to QUIP helpers such as c_dictionary_* and c_push_error, to sprintf/strcasecmp and Intel intrinsics, plus dozens of compiler-generated .L_2__STRING read-only labels, trimmed here)

gabor1 commented 5 years ago

So this suggests that the HAVE_NETCDF4 variable was set during the compile. Can you check your …/build/linux_x86_64_ifort_icc_mpi/Makefile.inc file to see what this variable is defined as? It should be 0 if you didn't select it.


-- Gábor


joshgabriel commented 5 years ago

So does that mean I shouldn't select netcdf for the Intel compilers on the HPC?

This is what is in my Makefile.inc. I based my choices on what I saw selected in /opt/quip/build/linux_x86_64_gfortran/Makefile.inc of the docker quip-py2; netcdf was set to 1 in the docker, so I chose it here too. The -L/-l link paths are those suggested by nc-config on the HPC.

---- Begin Makefile.inc ----
# Place to override settings elsewhere, in particular things set in Makefile.linux_x86_64_ifort_icc_mpi
# look in /gpfs_backup/patala_data/Joshua/QUIP_Docker/arch/Makefile.linux_x86_64_ifort_icc_mpi for defaults set by arch
MATH_LINKOPTS=-mkl
EXTRA_LINKOPTS=
USE_MAKEDEP=0
HAVE_CP2K=1
HAVE_VASP=1
HAVE_TB=1
HAVE_PRECON=1
HAVE_LOTF=1
HAVE_ONIOM=1
HAVE_LOCAL_E_MIX=1
HAVE_QC=1
HAVE_GAP=1
HAVE_QR=1
HAVE_THIRDPARTY=0
HAVE_FX=0
HAVE_SCME=0
HAVE_MTP=0
HAVE_MBD=0
HAVE_TTM_NF=0
HAVE_CH4=0
NETCDF4_LIBS=-L/usr/local/apps/netcdf-centos7/4.6.1-intel2017/lib -lnetcdf
NETCDF4_FLAGS=-I/usr/local/apps/netcdf-centos7/4.6.1-intel2017/include
HAVE_NETCDF4=1
HAVE_MDCORE=0
HAVE_ASAP=0
HAVE_CGAL=0
HAVE_METIS=0
HAVE_LMTO_TBE=0
SIZEOF_FORTRAN_T=2
---- End of Makefile.inc ----

My HPC doesn't support Singularity yet, unfortunately, so I am trying to compile with the options selected in the Docker.

gabor1 commented 5 years ago

I’m confused. I thought you WANTED to compile without netCDF support, because your HPC doesn’t have it. But this Makefile.inc was configured to have netcdf. You must have answered “y” to the question when you went through ‘make config’.

If you edit the Makefile.inc and put “0” for the HAVE_NETCDF4 variable, then run make clean and make again, it shouldn’t compile in calls to netcdf.
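That edit can be sketched as follows; the directory and the Makefile.inc fragment below are made up for illustration (the real file sits under build/<your arch>/ in QUIP_ROOT):

```shell
# Illustrative fragment only, not the real Makefile.inc.
mkdir -p build/linux_x86_64_ifort_icc_mpi
cat > build/linux_x86_64_ifort_icc_mpi/Makefile.inc <<'EOF'
HAVE_NETCDF4=1
NETCDF4_LIBS=-L/usr/local/netcdf/lib -lnetcdf
EOF

# Flip the flag off in place, then confirm:
sed -i 's/^HAVE_NETCDF4=.*/HAVE_NETCDF4=0/' build/linux_x86_64_ifort_icc_mpi/Makefile.inc
grep '^HAVE_NETCDF4' build/linux_x86_64_ifort_icc_mpi/Makefile.inc

# Then rebuild so the change takes effect (not run here):
# make clean && make
```

Note that sed -i as written assumes GNU sed.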

-- Gábor

