Closed JFA-Mbule closed 2 years ago
What is your compile error?
The compile error is:
[jaime.antonio@sdumont14 exec]$ module load netcdf hdf5 gcc/10.2
[jaime.antonio@sdumont14 exec]$ make gcc=on CLUBB=off
make BUILDROOT=/prj/cptec/jaime.antonio/These_JFA/ESM4_Project/exec/ SRCROOT=/prj/cptec/jaime.antonio/These_JFA/ESM4_Project/src/ MK_TEMPLATE=/prj/cptec/jaime.antonio/These_JFA/ESM4_Project/exec/templates/gnu.mk BLD_TYPE= OPENMP= --directory=fms libFMS.a
make[1]: Entering directory `/prj/cptec/jaime.antonio/These_JFA/ESM4_Project/exec/fms'
make[1]: warning: -jN forced in submake: disabling jobserver mode.
mkdir -p build && cd build && autoreconf -i /prj/cptec/jaime.antonio/These_JFA/ESM4_Project/src//FMS
cd build && /prj/cptec/jaime.antonio/These_JFA/ESM4_Project/src//FMS/configure FC="mpif90" CC="mpicc" FCFLAGS="-fcray-pointer -fdefault-real-8 -fdefault-double-8 -Waliasing -ffree-line-length-none -fno-range-check -fbacktrace -O2 -fno-expensive-optimizations" CFLAGS=" -O2" CPPFLAGS="-D__IFC -I/scratch/app/netcdf/4.6_openmpi-2.0_intel/include -I/scratch/app/pnetcdf/1.10_openmpi-2.0_intel/include -I/scratch/app/hdf5/1.8_openmpi-2.0_intel/include -DINTERNAL_FILE_NML -Duse_libMPI -Duse_netCDF -DMAXFIELDMETHODS_=500 -DMAXFIELDS_=500 -Duse_netCDF -DHAVE_SCHED_GETAFFINITY -I/scratch/app/netcdf/4.6_openmpi-2.0_intel/include" FPPFLAGS=" -I/scratch/app/netcdf/4.6_openmpi-2.0_intel/include" LIBS=""
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking how to print strings... printf
checking for style of include used by make... GNU
checking for gcc... mpicc
checking whether the C compiler works... no
configure: error: in `/prj/cptec/jaime.antonio/These_JFA/ESM4_Project/exec/fms/build':
configure: error: C compiler cannot create executables
See `config.log' for more details
make[1]: *** [configure] Error 77
make[1]: Leaving directory `/prj/cptec/jaime.antonio/These_JFA/ESM4_Project/exec/fms'
make: *** [fms/build/libFMS/.libs/libFMS.a] Error 2
Please, if it's not any trouble for you, we could talk using personal e-mails!
/prj/cptec/jaime.antonio/These_JFA/ESM4_Project/exec/fms/build/config.log
will have the specific issue that is causing the compile to fail. Usually the "checking whether the C compiler works... no" error from mpicc comes from a missing library path, or from the compiler not being named correctly.
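If it helps, a minimal way to locate the failing command (assuming standard Linux tools; the grep pattern is just an illustration):

grep -n -i "error" /prj/cptec/jaime.antonio/These_JFA/ESM4_Project/exec/fms/build/config.log | head    # show the first error lines in the log
which mpicc    # confirm which compiler wrapper configure is picking up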
Can you tell me which specific library isn't working?
I can send you the config.log file. Then you could look at it and tell me what this library is?
If you look at the config.log, it will tell you why the compiler failed. If it's due to a library, it will be listed there.
Your best bet is to get someone who is familiar with your system to help you with the build. That person will be able to troubleshoot any environment/library issues that you encounter.
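A couple of environment checks that whoever helps you can start from (standard commands on module-based systems):

module list              # show which modules are currently loaded
echo $LD_LIBRARY_PATH    # check the runtime library search path the compiler will see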
Ok, thank you for all the help!
I looked at the file, and I think the problem is with mpicc, as can be seen below:
configure:3534: mpicc -v >&5
mpicc: error while loading shared libraries: libimf.so: cannot open shared object file: No such file or directory
configure:3545: $? = 127
configure:3534: mpicc -V >&5
mpicc: error while loading shared libraries: libimf.so: cannot open shared object file: No such file or directory
configure:3545: $? = 127
configure:3534: mpicc -qversion >&5
mpicc: error while loading shared libraries: libimf.so: cannot open shared object file: No such file or directory
configure:3545: $? = 127
configure:3565: checking whether the C compiler works
configure:3587: mpicc -O2 -D__IFC -I/scratch/app/netcdf/4.6_openmpi-2.0_intel/include -I/scratch/app/pnetcdf/1.10_openmpi-2.0_intel/include -I/scratch/app/hdf5/1.8_openmpi-2.0_intel/include -DINTERNAL_FILE_NML -Duse_libMPI -Duse_netCDF -DMAXFIELDMETHODS_=500 -DMAXFIELDS_=500 -Duse_netCDF -DHAVE_SCHED_GETAFFINITY -I/scratch/app/netcdf/4.6_openmpi-2.0_intel/include -lhdf5 -lhdf5_fortran -lhdf5_hl -lhdf5_hl_fortran `nc-config --libs` `nf-config --flibs` conftest.c >&5
I'm a complete beginner and I don't know anyone who works with this model here. However, I'll keep looking for someone who can help me with this.
So, apologies for any inconvenience I may cause you.
Some contents of the file are shown below.
This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake.
It was created by GFDL FMS Library configure 2021.03.0, which was generated by GNU Autoconf 2.69. Invocation command line was
$ /prj/cptec/jaime.antonio/These_JFA/ESM4_Project/src//FMS/configure FC=mpif90 CC=mpicc FCFLAGS=-fcray-pointer -fdefault-real-8 -fdefault-double-8 -Waliasing -ffree-line-length-none -fno-range-check -fbacktrace -O2 -fno-expensive-optimizations CFLAGS= -O2 CPPFLAGS=-D__IFC -I/scratch/app/netcdf/4.6_openmpi-2.0_intel/include -I/scratch/app/pnetcdf/1.10_openmpi-2.0_intel/include -I/scratch/app/hdf5/1.8_openmpi-2.0_intel/include -DINTERNAL_FILE_NML -Duse_libMPI -Duse_netCDF -DMAXFIELDMETHODS_=500 -DMAXFIELDS_=500 -Duse_netCDF -DHAVE_SCHED_GETAFFINITY -I/scratch/app/netcdf/4.6_openmpi-2.0_intel/include FPPFLAGS= -I/scratch/app/netcdf/4.6_openmpi-2.0_intel/include LIBS=
hostname = sdumont14
uname -m = x86_64
uname -r = 3.10.0-1160.49.1.el7.x86_64
uname -s = Linux
uname -v = #1 SMP Tue Nov 9 16:09:48 UTC 2021
/usr/bin/uname -p = x86_64
/bin/uname -X = unknown
/bin/arch = x86_64
/usr/bin/arch -k = unknown
/usr/convex/getsysinfo = unknown
/usr/bin/hostinfo = unknown
/bin/machine = unknown
/usr/bin/oslevel = unknown
/bin/universe = unknown
PATH: /scratch/app/pnetcdf/1.10_openmpi-2.0_intel/bin
PATH: /scratch/app/netcdf/4.6_openmpi-2.0_intel/bin
PATH: /scratch/app/gcc/10.2/bin
PATH: /scratch/app/openmpi/icc/2.0.4/bin
PATH: /scratch/app/hdf5/1.8_openmpi-2.0_intel/bin
PATH: /usr/local/bin
PATH: /usr/bin
PATH: /usr/local/sbin
PATH: /usr/sbin
PATH: /opt/ibutils/bin
PATH: /prj/cptec/jaime.antonio/.local/bin
PATH: /prj/cptec/jaime.antonio/bin
configure:2411: checking build system type
configure:2425: result: x86_64-unknown-linux-gnu
configure:2445: checking host system type
configure:2458: result: x86_64-unknown-linux-gnu
configure:2481: checking target system type
configure:2494: result: x86_64-unknown-linux-gnu
configure:2539: checking for a BSD-compatible install
configure:2607: result: /usr/bin/install -c
configure:2618: checking whether build environment is sane
configure:2673: result: yes
configure:2824: checking for a thread-safe mkdir -p
configure:2863: result: /usr/bin/mkdir -p
configure:2870: checking for gawk
configure:2886: found /usr/bin/gawk
configure:2897: result: gawk
configure:2908: checking whether make sets $(MAKE)
configure:2930: result: yes
configure:2959: checking whether make supports nested variables
configure:2976: result: yes
configure:3110: checking how to print strings
configure:3137: result: printf
configure:3170: checking for style of include used by make
configure:3198: result: GNU
configure:3269: checking for gcc
configure:3296: result: mpicc
configure:3525: checking for C compiler version
configure:3534: mpicc --version >&5
mpicc: error while loading shared libraries: libimf.so: cannot open shared object file: No such file or directory
configure:3545: $? = 127
configure:3534: mpicc -V >&5
mpicc: error while loading shared libraries: libimf.so: cannot open shared object file: No such file or directory
configure:3545: $? = 127
configure:3534: mpicc -qversion >&5
mpicc: error while loading shared libraries: libimf.so: cannot open shared object file: No such file or directory
configure:3545: $? = 127
configure:3565: checking whether the C compiler works
configure:3587: mpicc -O2 -D__IFC -I/scratch/app/netcdf/4.6_openmpi-2.0_intel/include -I/scratch/app/pnetcdf/1.10_openmpi-2.0_intel/include -I/scratch/app/hdf5/1.8_openmpi-2.0_intel/include -DINTERNAL_FILE_NML -Duse_libMPI -Duse_netCDF -DMAXFIELDMETHODS_=500 -DMAXFIELDS_=500 -Duse_netCDF -DHAVE_SCHED_GETAFFINITY -I/scratch/app/netcdf/4.6_openmpi-2.0_intel/include -lhdf5 -lhdf5_fortran -lhdf5_hl -lhdf5_hl_fortran `nc-config --libs` `nf-config --flibs` conftest.c >&5
mpicc: error while loading shared libraries: libimf.so: cannot open shared object file: No such file or directory
configure:3591: $? = 127
configure:3629: result: no
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "GFDL FMS Library"
| #define PACKAGE_TARNAME "FMS"
| #define PACKAGE_VERSION "2021.03.0"
| #define PACKAGE_STRING "GFDL FMS Library 2021.03.0"
| #define PACKAGE_BUGREPORT "gfdl.climate.model.info@noaa.gov"
| #define PACKAGE_URL "https://www.gfdl.noaa.gov/fms"
| #define PACKAGE "FMS"
| #define VERSION "2021.03.0"
| /* end confdefs.h. */
configure:3634: error: in `/prj/cptec/jaime.antonio/These_JFA/ESM4_Project/exec/fms/build':
configure:3636: error: C compiler cannot create executables
See `config.log' for more details
ac_cv_build=x86_64-unknown-linux-gnu
ac_cv_env_CC_set=set
ac_cv_env_CC_value=mpicc
ac_cv_env_CFLAGS_set=set
ac_cv_env_CFLAGS_value=' -O2'
ac_cv_env_CPPFLAGS_set=set
ac_cv_env_CPPFLAGS_value='-D__IFC -I/scratch/app/netcdf/4.6_openmpi-2.0_intel/include -I/scratch/app/pnetcdf/1.10_openmpi-2.0_intel/include -I/scratch/app/hdf5/1.8_openmpi-2.0_intel/include -DINTERNAL_FILE_NML -Duse_libMPI -Duse_netCDF -DMAXFIELDMETHODS_=500 -DMAXFIELDS_=500 -Duse_netCDF -DHAVE_SCHED_GETAFFINITY -I/scratch/app/netcdf/4.6_openmpi-2.0_intel/include'
ac_cv_env_CPP_set=
ac_cv_env_CPP_value=
Yes, it appears you have an mpi issue related to the environment on your system. The best thing to do is to get someone who is familiar with the system to help you set up the environment correctly and compile.
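For context, libimf.so belongs to the Intel compiler runtime, and the PATH in the config.log above shows an Intel-built OpenMPI (/scratch/app/openmpi/icc/2.0.4/bin), so the mpicc being found cannot run when only GNU modules are loaded. A minimal check, assuming standard Linux tools:

ldd $(which mpicc) | grep "not found"    # list the shared libraries the mpicc wrapper cannot resolve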
Thanks a lot, I'll do that!
However, could you send me your personal e-mail? I'll probably have other questions about running the model. I want to run the model with ERA 5 data, so I think I'll need some help with this soon.
If it's not a problem for you, you could help me with this. My e-mail is in another question that I sent to you. If you send me an e-mail there, we can talk about the ideas I want to pursue.
One more time, thank you a lot!
I won't be able to help you with running the model with ERA 5 data as I have never done that. I'm not sure if there is a way to do it either. I can only help you with minor issues or if you find any bugs.
Hummm, okay!
But is there some way to do it with another reanalysis data source, such as GFS or MERRA? That is, can I use some external data (mainly from reanalysis) to force the model?
I think so, but again this is not help that I can provide to you. Sorry.
Okay, and do you know someone I could talk to about these questions?
If you do, please send me their e-mail!
Here's a link to the ESM4 site https://www.gfdl.noaa.gov/earth-system-esm4/
Here is a link to the ESM4 paper https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019MS002015
I hope you are able to figure out how to continue.
Hello, it's me again, Jaime. I'm trying to compile the ESM4 model on my institution's supercomputer (not on my laptop now). I followed the steps that you showed me last year (April 2021), but I ran into errors again. This time I'm trying to compile with the gcc/9.3 compiler. I tried hard to compile with Intel, but without success.
The following modules are available on the supercomputer:
[jaime.antonio@sdumont14 These_JFA]$ module avail
----------------------------------- /scratch/app/modulos -----------------------------------
gcc/10.2 gcc/11.1 gcc/6.5 gcc/7.4 gcc/8.3 gcc/9.3
gdal/2.4 gdal/3.3.2 gdb/9.2 gdl/0.9
hdf4/4.2.13 hdf4/4.2.14_openmpi-2.0.4.2_gnu
hdf5/1.12.2_gnu hdf5/1.8 hdf5/1.8_intel hdf5/1.8_openmpi-2.0_gnu hdf5/1.8_openmpi-2.0_intel
hyphy/2.5.32_gnu-openmpi-4.1.1
hypre/2.15_intel hypre/2.15_openmpi-2.0_gnu hypre/2.15_openmpi-2.0_intel
intel-oneapi/2022 intel-opencl/2017 intel-opencl/2018 intel_psxe/2016
openmpi/gnu/1.10.7 openmpi/gnu/1.10.7_gnu+ucx_1.6 openmpi/gnu/1.10.7_gnu+ucx_1.9 openmpi/gnu/1.8.6
openmpi/gnu/2.0.4 openmpi/gnu/2.0.4.14 openmpi/gnu/2.0.4.2 openmpi/gnu/2.0.4.2+cuda openmpi/gnu/2.0.4+cuda
openmpi/gnu/2.1.1 openmpi/gnu/2.1.6_gcc-8.3+cuda openmpi/gnu/3.1.4 openmpi/gnu/3.1.5_gcc-7.4
openmpi/gnu/4.0.1 openmpi/gnu/4.0.1+cuda openmpi/gnu/4.0.1_gcc-7.4 openmpi/gnu/4.0.3 openmpi/gnu/4.0.3.3 openmpi/gnu/4.0.3+cuda
openmpi/gnu/4.0.4 openmpi/gnu/4.0.4_gcc-7.4-cuda openmpi/gnu/4.0.4_ucx_1.12 openmpi/gnu/4.0.4_ucx_1.12+cuda openmpi/gnu/4.0.4_ucx_1.6
openmpi/gnu/4.1.1 openmpi/gnu/4.1.1+cuda openmpi/gnu/4.1.1+cuda-11.1 openmpi/gnu/4.1.2_ucx_1.12 openmpi/gnu/4.1.2_ucx_1.12+cuda openmpi/gnu/4.1.2_ucx_1.12+pmix
openmpi/gnu/4.1.3_gcc-7.4 openmpi/gnu/4.1.4+cuda-11.2 openmpi/gnu/4.1+cuda
openmpi/gnu/ilp64/DISABLED-2.0.4.2 openmpi/gnu/mt/DISABLED-2.0.4.2 openmpi/gnu/mt/ilp64/DISABLED-2.0.4.2
openmpi/icc/2.0.4 openmpi/icc/2.0.4.2 openmpi/icc/4.0.3 openmpi/icc/4.0.3.3 openmpi/icc/4.0.4
openmpi/icc/debug/DISABLED-2.0.2.10 openmpi/icc/ilp64/DISABLED-2.0.4.2 openmpi/icc/mt/debug/DISABLED-2.0.2.10 openmpi/icc/mt/DISABLED-2.0.4.2 openmpi/icc/mt/ilp64/DISABLED-2.0.4.2
openmx/3.8_intel openpmix/4.1.0_gnu
So, I think all of the needed modules are available. Thus I don't understand why, after loading the netcdf, hdf5, and gcc/9.3 (or gcc/10.2) modules, running:
make gcc=on CLUBB=off
still does not work.
Please, can you help me with this?
I called:
module load netcdf hdf5 gcc/9.3 (or gcc/10.2)
make gcc=on CLUBB=off
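For reference, mixing the Intel-built MPI/HDF5/netCDF modules with a GNU compiler is the likely source of the libimf.so failure above. A hypothetical sketch of a matched GNU stack, using module names from the listing above (the netcdf module name here is an assumption and must be checked with module avail; this is not a verified recipe):

module purge                              # start from a clean environment
module load gcc/10.2                      # GNU compiler
module load openmpi/gnu/2.0.4             # GNU-built OpenMPI instead of the Intel-built openmpi/icc
module load hdf5/1.8_openmpi-2.0_gnu      # HDF5 built against that GNU OpenMPI
module load netcdf                        # assumed name; pick a netCDF built with the same GNU stack
make gcc=on CLUBB=off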
I will certainly need your help in the future with running the model, because I'm a beginner at this, and I want to run some experiments with ERA 5 data but have no idea how to do that. This is for my Ph.D. research.
Please, could you give me some help with this?