oceanmodeling / roms

Regional Ocean Modeling System (ROMS)
https://github.com/myroms/roms/wiki

ROMS-CICE coupling for Bering Sea #5

Closed: saeed-moghimi-noaa closed this issue 1 week ago

saeed-moghimi-noaa commented 1 year ago

@SmithJos13, @sdurski, @kurapov-noaa, @uturuncoglu, @pvelissariou1, @hga007

Description

OSU and OCS are working together toward coupling CICE and ROMS using the UFS-Coastal code base.

Required configurations

saeed-moghimi-noaa commented 1 year ago

FYI @BahramKhazaei-NOAA

saeed-moghimi-noaa commented 1 year ago

Hi @uturuncoglu

Please add the steps that you suggested in the meeting for @SmithJos13 to look at.

Thanks,

uturuncoglu commented 1 year ago

@saeed-moghimi-noaa @SmithJos13 Thanks for creating this issue. Here are the items that we need to check:

hga007 commented 1 year ago

@saeed-moghimi-noaa @uturuncoglu, Yes, I agree with Ufuk. It is much better to use the CICE component available under the UFS. I am unsure what version of CICE it is, but I assume it will be easier to update to CICE6.

We want to add a test case for the CICE-ROMS coupling and ROMS native sea-ice model. I already built the grid for Lake Erie.

uturuncoglu commented 1 year ago

@hga007 JFYI, it's CICE6 and compatible with CMEPS.

saeed-moghimi-noaa commented 1 year ago

@uturuncoglu @pvelissariou1 @SmithJos13

Hi Ufuk

Do you have any recommendations for Joe on where he can start to compile CICE plus the other components needed for the initial test?

Thanks, -Saeed

uturuncoglu commented 1 year ago

@saeed-moghimi-noaa At this point, we have no application under UFS-Coastal that builds CICE, CMEPS, CDEPS, and ROMS together. Let me create that; at a minimum it will let us produce an executable that can be used for different setups. I suggest the following steps: (1) start with DATM+CICE first (I also need to update CMEPS to allow this coupling), (2) then include ROMS, i.e. DATM+CICE+ROMS but without any connection between CICE and ROMS, and (3) create the connection between CICE and ROMS. This lets us build the configuration from simple to more advanced, see the missing pieces, and fill them in.
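
As a rough illustration of where step (1) is heading: the components would be wired together through a CMEPS run sequence in ufs.configure. The sketch below is untested and only indicative; the phase names and the 3600 s coupling interval are placeholders until the CMEPS update is done.

runSeq::
@3600
  DATM -> MED :remapMethod=redist
  MED med_phases_post_atm
  MED med_phases_prep_ice
  MED -> ICE :remapMethod=redist
  ICE
  ICE -> MED :remapMethod=redist
  MED med_phases_post_ice
  MED med_phases_restart_write
@
::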

SmithJos13 commented 11 months ago

Hi @uturuncoglu,

I have started trying to get the most recent version of the ESMF library compiled on a local machine that we have at OSU, and I have been running into some issues. The system I'm trying to compile on is an Intel machine running Linux. The Fortran/C/C++ compilers I am working with are Intel v19.1.0 (all from the same distribution), and I'm trying to compile ESMF using OpenMPI 3.1.3.

I have set the ESMF_DIR variable to the directory where I downloaded the git repository, and I have set ESMF_OS=linux, ESMF_COMPILER=intel, and ESMF_COMM=openmpi.
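
(For reference, a typical from-source ESMF build with these settings looks roughly like the following; the paths are placeholders, and ESMF normally auto-detects ESMF_OS from uname -s, which reports Linux with a capital L.)

export ESMF_DIR=$HOME/esmf       # path to the cloned ESMF source tree (illustrative)
export ESMF_COMPILER=intel
export ESMF_COMM=openmpi
export ESMF_BOPT=O               # optimized build; use g for a debug build
cd $ESMF_DIR
make info 2>&1 | tee info.log    # print and sanity-check the detected build settings
make -j 8 2>&1 | tee build.log
make ESMF_TESTEXHAUSTIVE=ON unit_tests 2>&1 | tee tests.log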

I have set the paths to all my compilers under the openmpi heading in '~/build_config/Linux.intel.default/build_rules.mk'.

The issue I have been having is that the makefile does not seem able to locate an object file called 'binary.o'. The error is:

'g++: error: minimal: No such file or directory'

I have checked in the folder and there is no file by that name, which I suppose makes sense given the error I am getting. I'm wondering if there is another variable I need to set or enable when trying to compile with OpenMPI? Maybe something isn't getting copied that should be? I'd appreciate any suggestions you have on how I can fix this.

Since I was having some issues building the OpenMPI version of the library, I tried to compile the mpiuni version as a test case to make sure all my compilers were in order, and I was able to build that successfully. However, I ran into some errors during the verification step of the installation. When I run

'make ESMF_TESTEXHAUSTIVE=ON unit_tests_uni'

I get the following error:

/home/server/pi/homes/smithj28/esmf/src/Infrastructure/FieldBundle/src/ESMF_FieldBundle.F90:92.46:

character(len=:), allocatable :: encodeName ! storage for packed FB

I'll need to look into this one a bit further to determine what is going on. Maybe you have some ideas as well?

I look forward to hearing what you think might be going on. Hopefully I didn't bombard you with too many questions...

Thanks, Joseph

hga007 commented 11 months ago

@SmithJos13, @uturuncoglu. In the past, we had a lot of issues when ifort and g++ (or gcc) were combined; we needed to get the appropriate versions. Do you have access to icc? Also, the OpenMPI build complicates matters. Nowadays we prefer spack-stack, but it takes experience to install it.

pvelissariou1 commented 11 months ago

@SmithJos13 If possible, use Intel 20 or newer. As @hga007 mentioned, we had similar problems using Intel 19.x.x.x versions on certain HPC clusters, both during compilation and at run time.

uturuncoglu commented 11 months ago

@SmithJos13 You might also want to check the following web page: https://oceanmodeling.github.io/ufs-coastal-app/versions/main/html/porting.html. It mainly explains how to install spack-stack and use it to run UFS-Coastal, and it would be a good starting point for porting the model to your platform. If you try it and have issues, let me know.

saeed-moghimi-noaa commented 9 months ago

Hi @SmithJos13

Would you please update here where you are at with the ROMS-CICE coupling?

Thanks, -Saeed

sdurski commented 9 months ago

Hi Saeed,

Joey is away this week. So I will try to answer.

We have not had success building the NUOPC coupling layer on our local compute server. The rather dated OS on this server precludes installation of proper versions of some NUOPC prerequisites. We are currently splitting the server in two so that the OS can be upgraded (this week) on one of its boxes; then current versions of all the dependencies can be installed. We expect to be able to use the suggested spack-stack approach to get everything built. Our computer support staff have already looked into that portion of the process and built components successfully.

We've asked about access to NOAA resources where NUOPC is already built to speed this process, but have not heard back yet. Running the coupled ROMS-CICE for the Bering Sea application will very likely require more resources than the single box of our compute server can handle, so I think getting that access will be important either way.

We have a standalone version of CICE 'fully optimized' for the Bering Sea application, and it is producing quite good results (when forced from above with ERA5 reanalysis and from below with ROMS Bering Sea model output). I suspect that running coupled ROMS-CICE using the MCT coupler in COAWST would be straightforward, as we already have COAWST with that old style of coupling up and running on a NASA machine we have access to. But I understand that NUOPC is a central piece of what we're trying to test here.

I hope this helps. Please let me know if you have any other questions.

Scott

saeed-moghimi-noaa commented 9 months ago

@sdurski

Hi Scott,

Thanks for the update. I have requested HPC access for you and have followed up on that several times. I will try again today.

Thanks, -Saeed

SmithJos13 commented 9 months ago

Hi @uturuncoglu,

Our system here at OSU has finally been upgraded, which means I have been able to try installing the UFS model dependencies with Docker + spack-stack. I've followed the directions as laid out in the link you sent (https://oceanmodeling.github.io/ufs-coastal-app/versions/main/html/porting.html); however, these seem to build the UFS model with GCC rather than the Intel compilers. I was under the impression that we wanted to build the ufs-model with Intel compilers (maybe I misunderstood). Will it be possible to adapt this method to Intel compilers if we want to, or are there already instructions for that?

Anyway, I have followed the directions as laid out and was unsuccessful in building the model. I've run into an issue building the dependencies; see the following two screenshots for the exact error:

[screenshots: argument-mismatch error while building the mapl dependency]

Do you have any idea what might be the issue?

Also, I think the following line in the directions might have a typo. In Section 3, the note about the proj error and how to fix it has a line that reads

git cherry-pick upsream 149d194 (and resolve conflicts)

I think that it should be

git cherry-pick 149d194 (and resolve conflicts)

The terminal didn't seem too happy about 'upsream' being there.
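
(If the commit lives on a remote named upstream, the usual pattern is to fetch that remote first and then cherry-pick by hash alone; the URL below is a placeholder.)

git remote add upstream <upstream-repo-url>   # skip if the remote already exists
git fetch upstream                            # makes commit 149d194 known locally
git cherry-pick 149d194                       # then resolve conflicts if prompted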

Thanks, Joey

pvelissariou1 commented 9 months ago

Hi @SmithJos13 Joseph, for the argument mismatch with gcc/gfortran version >= 10 you need to pass the flags "-fallow-argument-mismatch -fallow-invalid-boz" to the compiler until the offending Fortran code is fixed. I don't know why you need to build mapl anyway.
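
One place such flags can be injected is the compilers.yaml used by Spack, along these lines (a sketch only; the compiler version, paths, and OS entry are illustrative):

compilers:
- compiler:
    spec: gcc@11.4.0
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    flags:
      fflags: -fallow-argument-mismatch -fallow-invalid-boz
    operating_system: ubuntu22.04
    modules: []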

SmithJos13 commented 9 months ago

Thanks for the quick reply! Yeah, I'm not sure about that either; I think it might be a dependency of a dependency or something like that. So is there a particular place in spack where I need to feed this argument?

pvelissariou1 commented 9 months ago

@SmithJos13 I went quickly through the spack-stack code tree and found other packages with the same issue. You might need to create a patch for mapl. If you look at spack/var/spack/repos/builtin/packages/parallelio/gfortran.patch you might get a clue about how to incorporate the patch. I cannot help more; I haven't worked with spack/spack-stack yet and don't know many of the details. @uturuncoglu can surely help more.

hga007 commented 9 months ago

I always work with ifort. For ROMS-CICE coupling, we need to enhance the export/import states to include sea-ice field exchange in the ROMS standalone NUOPC cap module for the UFS (cmeps_roms.h) and add more entries to the coupling YAML file (`romscmeps.yaml`). I added plenty of documentation in https://github.com/myroms/roms_test/blob/main/IRENE/Coupling/roms_data_cmeps/Readme.md

pvelissariou1 commented 9 months ago

Hi @hga007, I was trying to build ROMS and the other models in UFS-Coastal and I encountered the following error (a name conflict between libraries):

/apps/oneapi/compiler/2022.0.2/linux/compiler/lib/intel64_lin/libifport.a(abort.o): In function `abort_':
abort.c:(.text+0x20): multiple definition of `abort_'
ROMS-interface/ROMS/libROMS.a(abort.f90.o):/scratch2/STI/coastal_temp/save/Panagiotis.Velissariou/ufs-coastal-coastal_app/tests/build_fv3_coastal/ROMS-interface/ROMS/f90/abort.f90:1: first defined here

I don't know if you have seen this before, but might it be a good idea to rename SUBROUTINE abort in ./Utility/abort.F to something like roms_abort to avoid the conflict?
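
One quick way to confirm a collision like this (a sketch; the libifport path is taken from the error above) is to ask nm which archives define the symbol:

# both archives define the Fortran symbol abort_, so the linker sees it twice
nm ROMS-interface/ROMS/libROMS.a | grep ' T abort_'
nm /apps/oneapi/compiler/2022.0.2/linux/compiler/lib/intel64_lin/libifport.a | grep ' T abort_'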

SmithJos13 commented 9 months ago

@pvelissariou1 So I was looking at the file you pointed me towards, and it seems like some of the flags you listed are already incorporated in that file. I've tried adding "-fallow-invalid-boz" to the other flags and I'm still running into the same issue. Is there some particular modification I need to make to this file to get it to recognize these flags? (I know you just said you haven't worked with spack-stack before, so maybe @uturuncoglu can shed some light on this.)

SmithJos13 commented 9 months ago

Actually, I was poking around in some of the files in spack/var/spack/repos/builtin/packages/mapl/, and in spack/var/spack/repos/builtin/packages/mapl/package.py it looks like there is some form of a patch to handle the gfortran version mismatch; see the screenshot below:

[screenshot: gfortran compatibility patch logic in mapl's package.py]

So maybe there is something else causing the issue?

uturuncoglu commented 9 months ago

@SmithJos13 First of all, I have not tested a custom spack-stack installation with the Intel compiler yet. I have a version of a Dockerfile that creates a Docker image and is able to run the RTs, but this again uses GNU. I could push that file to the ufs-coastal-app repo so you can see the commands I am using, but installing on an existing system could require some small changes. If I remember correctly, I also have a custom spack-stack installation that works on Orion. Still, it is hard to automate everything and make it work on every custom platform; there will always be system-specific issues. I could also try to extract the information from the Dockerfile and create a new dependency installation script as far as possible, but I need to work on it.

uturuncoglu commented 9 months ago

@SmithJos13 BTW, spack-stack 1.5.1 is working for me (this comes with ESMF 8.5.0). They are planning to release 1.6.0 to fix some issues, but I am not sure of its timeline.

hga007 commented 9 months ago

For us, spack-stack 1.5.1 with JEDI and UFS support works well on my Linux box using Intel 2021.8.0 and ESMF 8.5.0. In our JEDI meeting today, they started talking about spack-stack 1.6.0. Dave Robertson is working to install 1.5.1 on our other computers, which is not trivial, and he has lots of experience. I hope spack-stack will be more stable in the future and require less frequent updates.

uturuncoglu commented 9 months ago

@hga007 Maybe I could try to create a script to automate the installation. Then you could try it on your side with the help of Dave Robertson; if we need changes, we could incorporate them and update it. Do you think that can be done? I know the script may always need manual intervention for specific GNU or Intel versions, but we could do our best.

SmithJos13 commented 9 months ago

Thanks @uturuncoglu @hga007. I suppose I'll try to work on installing version 1.5.1 of spack-stack. I'm a little new to using GitHub... so if I wanted to do that, do I just check out the 1.5.1 version of spack-stack, run "git submodule update --init --recursive", and then proceed with the install instructions?

uturuncoglu commented 9 months ago

@SmithJos13 Yes. Let me know if you need help.
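
In concrete terms, that would be roughly the following (the tag name 1.5.1 matches what the Dockerfile excerpt further down uses):

git clone --recurse-submodules https://github.com/jcsda/spack-stack.git
cd spack-stack
git checkout 1.5.1                        # check out the release tag
git submodule update --init --recursive   # sync submodules to that tag
source setup.sh                           # set up the spack tooling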

SmithJos13 commented 9 months ago

@uturuncoglu

Okay, I've tried running the install steps for v1.5.1 of spack-stack and I've run into the following issue:

[screenshot: error during the spack-stack v1.5.1 installation steps]

uturuncoglu commented 9 months ago

@SmithJos13 I know there are some typos in the current documentation (I'll update it when I have a container for the coastal app). Here is part of my Dockerfile. There are some unnecessary steps in it, since I am installing everything from scratch (the modules tool, etc.), but it could be a good reference for checking your commands.

ARG SPACK_STACK_VERSION=1.5.1
ARG COMPILER_VERSION=11.4.0
ARG OPENMPI_VERSION=4.1.6
...
...
RUN cd /opt && \
  source /etc/profile.d/lmod.sh && \
  git clone --recurse-submodules https://github.com/jcsda/spack-stack.git && \
  cd spack-stack && \
  git checkout ${SPACK_STACK_VERSION} && \
  git submodule update --init --recursive && \
  export SPACK_ROOT=/opt/spack-stack/spack && \
  source setup.sh  && \
  spack stack create env --site linux.default --template ufs-weather-model --name ufs.local --prefix /opt/ufs.local && \
  cd envs/ufs.local/ && \
  spack env activate . && \
  cd /opt/spack-stack && \
  export SPACK_SYSTEM_CONFIG_PATH="$PWD/envs/ufs.local/site" && \
  spack external find --scope system --exclude bison --exclude cmake --exclude curl --exclude openssl --exclude openssh && \
  spack external find --scope system perl && \
  spack external find --scope system wget && \
  spack external find --scope system mysql && \
  spack external find --scope system texlive && \
  spack compiler find --scope system && \
  unset SPACK_SYSTEM_CONFIG_PATH && \
  spack config add "packages:all:compiler:[gcc@${COMPILER_VERSION}]" && \
  spack config add "packages:all:providers:mpi:[openmpi@${OPENMPI_VERSION}]" && \
  spack config add "packages:fontconfig:variants:+pic" && \
  spack config add "packages:pixman:variants:+pic" && \
  spack config add "packages:cairo:variants:+pic" && \
  spack config add "packages:libffi:version:[3.3]" && \
  cd /opt/spack-stack/envs/ufs.local && \
  sed -i 's/tcl/lmod/g' site/modules.yaml && \
  cd /opt/spack-stack && \
  spack concretize 2>&1 | tee log.concretize && \
  spack install --source 2>&1 | tee log.install && \
  spack gc -y  2>&1 | tee log.clean && \
  spack module lmod refresh -y && \
  spack stack setup-meta-modules

uturuncoglu commented 9 months ago

@SmithJos13 BTW, I was getting a similar issue in the past, but I think there could also be an issue in the original spack-stack documentation. It would be nice to open an issue there so they are aware of it.

SmithJos13 commented 9 months ago

Thanks for supplying your Dockerfile @uturuncoglu. I'll have to see if I run into the same error again; I'll let you know if I run into further problems.

uturuncoglu commented 9 months ago

@SmithJos13 Okay. Note that this is not the complete file but just part of it.

rjdave commented 9 months ago

I have no experience with Docker, but I can attest to the difficulties of building spack-stack on older OSs. I am currently struggling with the current develop branch of spack-stack on CentOS 7.9. As others have noted here, outdated compilers are the biggest problem I run into. I have not built the stack with GNU compilers in a while, but the Intel compilers rely on a modern GNU compiler suite in order to work to their full potential. Recently, I switched to using the jedi-ufs-env environment (plus nco) instead of the ufs-weather-model-env that is activated by the ufs-weather-model template.

I have doubts that the procedure I use to create a usable spack-stack would translate to a Docker setup, because I still have a few steps that require hand-editing of spack environment files. There may be spack commands to accomplish what I'm doing, but I haven't found them yet. I am not sure if this will help anyone, but I am including my current procedure for building spack-stack (note that some of this syntax will only work in a bash-like shell):

uturuncoglu commented 9 months ago

@rjdave Thanks for the detailed steps. I am working on a script that will build the dependencies through spack-stack, and I will leverage your instructions to finalize it. At this point, I am planning to test it on a Docker container, on Orion, and maybe on Derecho (NCAR's machine). If it works on those platforms, I may then ask you to test it on your platform. After that, we could push it to ufs-coastal-app and add some documentation to the application. Let me know what you think.

SmithJos13 commented 9 months ago

I just want to kind of collect some information here...

@hga007 and @uturuncoglu, are you using Docker to build spack-stack on your local Linux boxes when you have tried? What configurations have you tried and found to work for you two? Is gcc or Intel the better way to go?

We have now upgraded our Linux box to the latest version of CentOS (the machine was upgraded over the winter break), so we are no longer trying to build on an old OS (at least I don't think we are). Am I correct in thinking that Docker is supposed to eliminate some of the issues with building spack-stack on an old OS, since the container we are using is the latest version of Ubuntu?

I would just like to make sure that we are all on the same page here.

SmithJos13 commented 9 months ago

Also, @uturuncoglu, I've checked the commands I was using against the commands you supplied, and I'm still having issues with the spack-stack install.

uturuncoglu commented 9 months ago

@SmithJos13 I was able to get a custom spack-stack installation working on Orion and use it to run the model. At this point I am trying to put together a script for spack-stack installation, and I'll let you know when it is ready. In the meantime, if you still have an issue with a specific command, I recommend opening a ticket on the spack-stack side. I was getting the same error you got, but now it is fine; I am not sure, but it could be related to the system and the version of some tool used by spack-stack, so it would be nice to inform them about it.

SmithJos13 commented 9 months ago

> @SmithJos13 BTW, I was getting a similar issue in the past, but I think there could also be an issue in the original spack-stack documentation. It would be nice to open an issue there so they are aware of it.

Hey @uturuncoglu, I think I found what was causing the issue: I didn't have the environment active when I was trying to run those commands. Running spack env activate . from /opt/spack-stack/envs/ufs.local/ fixed the problem when I ran into it this morning.
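
For anyone hitting the same thing, the working sequence is (spack env status is a standard way to confirm which environment is active):

cd /opt/spack-stack/envs/ufs.local
spack env activate .
spack env status   # should report ufs.local as the active environment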

Okay, so I've been able to step through all the steps in your post and I was able to install spack-stack (I think...).

Now I'm running into a new issue when it comes to loading the modules, while following the installation process at https://oceanmodeling.github.io/ufs-coastal-app/versions/main/html/porting.html to continue building the model.

I've run:

spack module tcl refresh
spack stack setup-meta-modules

Then continued with the following

. /etc/profile.d/modules.sh
module use /opt/ufs.local/modulefiles/Core
module load stack-gcc/11.4.0

returns

Module ERROR: Magic cookie '#%Module' missing
  In '/opt/ufs.local/modulefiles/Core/stack-gcc/11.4.0.lua'
  Please contact <root@localhost>

Further running: module avail returns:

-------------------- /usr/share/modules/modulefiles ----------
dot  module-git  module-info  modules  null  use.own  

Key:
modulepath 

So no modules are included even after the command module use /opt/ufs.local/modulefiles/Core. I'm wondering if there are further steps I need to take to get module to recognize all the modules that were installed using spack. Also, I notice that you are using Lmod while the porting instructions use tcl; is there any advantage to one or the other? I appreciate any insight you have into this! Thanks.

rjdave commented 9 months ago

@SmithJos13 I have never set up spack-stack using the old environment-modules; I have always used Lmod. That said, it appears that spack stack setup-meta-modules created an Lmod-style module file (note the .lua) instead of a tcl-style one. My guess is that you mixed some commands from https://oceanmodeling.github.io/ufs-coastal-app/versions/main/html/porting.html and some from @uturuncoglu's Dockerfile above.

If you look closely, the Dockerfile above replaces tcl with lmod using a sed command (sed -i 's/tcl/lmod/g' site/modules.yaml) and later uses spack module lmod refresh -y, whereas the porting guide sticks to tcl throughout.

I would also note that as long as you are using a new enough spack-stack (1.5+, I believe), you should be able to eliminate the need for that sed command by adding the --modulesys lmod option to the spack stack create env command, as in my notes above.
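
That is, something along these lines (site, template, name, and prefix taken from the Dockerfile above):

spack stack create env --site linux.default --template ufs-weather-model \
    --name ufs.local --prefix /opt/ufs.local --modulesys lmod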

uturuncoglu commented 9 months ago

@SmithJos13 The installation process described at https://oceanmodeling.github.io/ufs-coastal-app/versions/main/html/porting.html is not up to date and has a couple of minor typos, so please stick to the Dockerfile approach. In that case I am using Lmod.
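
With the Lmod route, loading the freshly built stack should then look roughly like this (a sketch; the module names follow the meta-module output later in this thread):

source /etc/profile.d/lmod.sh                # ensure Lmod, not environment-modules, is in use
module use /opt/ufs.local/modulefiles/Core   # meta-module directory written by setup-meta-modules
module load stack-gcc/11.4.0
module load stack-openmpi/4.1.5
module load stack-python/3.10.8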

SmithJos13 commented 9 months ago

@uturuncoglu Okay, I won't look at the instructions at that link. Setting aside the tcl-vs-lmod question, I think I have installed the dependencies using spack-stack properly. Running the command spack module lmod refresh returns the following:

==> You are about to regenerate lmod module files for:

-- linux-ubuntu22.04-zen2 / gcc@11.4.0 --------------------------
qrlj37n bacio@2.4.1         gkbs7qp jasper@2.0.32         uxz7xo7 netcdf-fortran@4.6.0    4dz5yiq py-openpyxl@3.0.7
as3bga3 base-env@1.0.0      4rtleko krb5@1.20.1           x2vc3ui numactl@2.0.14          zbq53zl py-pandas@1.5.3
vots4p7 bzip2@1.0.8         7vwz2xv libaec@1.0.6          7ogdbbl openblas@0.3.19         bksx7tp py-pip@23.1.2
wpwhcsg c-blosc@1.21.4      n4ju5jy libbsd@0.11.7         ebu7o6q openmpi@4.1.5           aqdjzwj py-python-dateutil@2.8.2
ervjwhi cmake@3.23.1        kwrd27a libedit@3.1-20210216  ddft6li openssh@9.3p1           y7ynxfj py-pytz@2023.3
4er464d crtm@2.4.0          samj44u libevent@2.1.12       b3mwaoe openssl@1.1.1u          pbm3x2h py-pyyaml@5.4.1
iyenx7h crtm-fix@2.4.0_emc  azt6nfe libffi@3.3            biqw4cd parallel-netcdf@1.12.2  dre7ujh py-setuptools@59.4.0
glxzbwd curl@8.1.2          rmih7z7 libiconv@1.17         hug3ikc parallelio@2.5.10       npora6d py-six@1.16.0
kr7idsa esmf@8.5.0          aguvrlm libidn2@2.3.4         nmdrd6j pcre2@10.42             zgkz6ze python@3.10.8
bkrk5p3 expat@2.5.0         lwofrkt libjpeg-turbo@2.1.0   5c6shhb perl@5.34.0             quztabs readline@8.2
wy6wzsw fargparse@1.5.0     ayvxyiw libmd@1.0.4           sc5nq7w pkg-config@0.29.2       6iyjv5p scotch@7.0.4
x2e3jnb fms@2023.02.01      mmmtzee libpciaccess@0.17     fmsa4r2 pmix@4.2.3              jsn5ytq snappy@1.1.10
xfqo3m5 g2@3.4.5            v752wfl libpng@1.6.37         ubzllqd py-bottleneck@1.3.7     cgmzjgp sp@2.3.3
o2y6hf4 g2tmpl@1.10.2       ct5ujzg libunistring@1.1      skkztha py-cftime@1.0.3.4       cqpjsma sqlite@3.42.0
5m2u4mi gdbm@1.23           3z6g2o5 libxcrypt@4.4.35      3ekcbll py-cython@0.29.35       5sd3vbs tar@1.34
graz6k4 gettext@0.21.1      x7n3gcf libxml2@2.10.3        kllgbjx py-et-xmlfile@1.0.1     vuimzqi ufs-pyenv@1.0.0
3nmv5ub gftl@1.10.0         oglwmae libyaml@0.2.5         ppdprun py-f90nml@1.4.3         yvlksui ufs-weather-model-env@1.0.0
rvqsfku gftl-shared@1.6.1   xi4kgpz lz4@1.9.4             oal2zan py-jdcal@1.3            oal2e5r util-linux-uuid@2.38.1
zw7asim git@2.40.0          gdcpskb m4@1.4.18             2whb7v2 py-jinja2@3.1.2         rcywnkc w3emc@2.10.0
gxmpvaf git-lfs@3.0.2       fo4azyx mapl@2.40.3           5dkshvs py-markupsafe@2.1.3     s7gfo5q wget@1.21.2
j5elfyu hdf5@1.14.0         jtlneao nccmp@1.9.0.1         gczw7pf py-netcdf4@1.5.8        4jezwnc xz@5.4.1
ongf6u4 hwloc@2.9.1         n4r5nrz ncurses@6.4           insttai py-numexpr@2.8.4        phod4zk zlib@1.2.13
xfutlqu ip@4.3.0            6u2x5ao netcdf-c@4.9.2        ihlj53n py-numpy@1.22.3         e4f27s5 zstd@1.5.2
==> Do you want to proceed? [y/n] y
==> Regenerating lmod module files

which appears to be the complete list of all the dependencies I need in order to start porting the UFS model. The command spack stack setup-meta-modules returns:

Configuring basic directory information ...
  ... script directory: /opt/spack-stack/spack/lib/jcsda-emc/spack-stack/stack
  ... base directory: /opt/spack-stack/spack/lib/jcsda-emc/spack-stack
  ... spack directory: /opt/spack-stack/spack
Configuring active spack environment ...
  ... environment directory: /opt/spack-stack/envs/ufs.local
Parsing spack environment main config ...
  ... install directory: /opt/ufs.local
Parsing spack environment modules config ...
  ... configured to use lmod modules
  ... module directory: /opt/ufs.local/modulefiles
Parsing spack environment package config ...
  ... list of possible compilers: '['gcc@11.4.0', 'gcc', 'intel', 'pgi', 'clang', 'xl', 'nag', 'fj', 'aocc']'
  ... list of possible mpi providers: '['openmpi@4.1.6', 'openmpi', 'mpich']'
['openmpi', 'module-index.yaml', 'gcc', 'Core']
 ... stack compilers: '{'gcc': ['11.4.0', '11.4.0']}'
 ... stack mpi providers: '{'openmpi': {'4.1.5': {'gcc': ['11.4.0', '11.4.0']}}}'
  ... core compilers: ['gcc@4.6']
Preparing meta module directory ...
  ... meta module directory : /opt/ufs.local/modulefiles/Core
Creating compiler modules ...
  ... configuring stack compiler gcc@11.4.0
  ... ... CC  : /usr/bin/gcc
  ... ... CXX : /usr/bin/g++
  ... ... F77 : /usr/bin/gfortran
  ... ... FC  : /usr/bin/gfortran
  ... ... COMPFLAGS: 
  ... ... MODULELOADS: 
  ... ... MODULEPREREQS: 
  ... ... MODULEPATH  : /opt/ufs.local/modulefiles/gcc/11.4.0
  ... writing /opt/ufs.local/modulefiles/Core/stack-gcc/11.4.0.lua
  ... configuring stack mpi library openmpi@4.1.5 for compiler gcc@11.4.0
  ... ... MODULELOADS: load("openmpi/4.1.5")
  ... ... MODULEPREREQS: prereq("openmpi/4.1.5")
  ... ... MODULEPATH  : /opt/ufs.local/modulefiles/openmpi/4.1.5/gcc/11.4.0
  ... writing /opt/ufs.local/modulefiles/gcc/11.4.0/stack-openmpi/4.1.5.lua
  ... configuring stack mpi library openmpi@4.1.5 for compiler gcc@11.4.0
  ... ... MODULELOADS: load("openmpi/4.1.5")
  ... ... MODULEPREREQS: prereq("openmpi/4.1.5")
  ... ... MODULEPATH  : /opt/ufs.local/modulefiles/openmpi/4.1.5/gcc/11.4.0
  ... writing /opt/ufs.local/modulefiles/gcc/11.4.0/stack-openmpi/4.1.5.lua
  ... using spack-built python version 3.10.8
 ... stack python providers: '{'python': ['3.10.8']}'
  ... configuring stack python interpreter python@3.10.8 for compiler gcc@11.4.0
  ... writing /opt/ufs.local/modulefiles/gcc/11.4.0/stack-python/3.10.8.lua

which is successful. Then I'm trying to load the module files using environment modules [version: module --version returns Modules Release 5.3.1 (2023-06-27)]. I've tried the following command to point module at the directory where the module files are stored: module use opt/ufs.local/modulefiles/gcc/11.4.0/, and then run module refresh. Typing module avail shows that there are no available modules from that directory (see below):

----------------------------------- /usr/local/Modules/modulefiles ----------------------
dot  module-git  module-info  modules  null  use.own  

Key:
modulepath  

which hasn't changed since I installed environment modules in docker.

I'm struggling to figure out how to load the module files I created using spack so that I can use environment modules inside Docker.

Are there further steps I need to take to load the modules? Do I need to use spack to load the modules? Furthermore, are there any special instructions I need to follow in order to port the UFS model to my local Linux box, or should I be able to follow the steps as listed at https://ufs-weather-model.readthedocs.io/en/latest/BuildingAndRunning.html?

Thanks for all your help!

uturuncoglu commented 8 months ago

@SmithJos13 That is really great progress. Since this is a custom machine, you need to create a set of module files. Are you planning to run the regression tests too? If so, you will also need to make a couple of changes to run them. There are some commits I made last December for the AMS short course (related to the land component) at the following link, for running RTs under a Docker container:

https://github.com/uturuncoglu/ufs-weather-model/commits/feature/ams_course/

Please look at the commits starting with Dec. 5. They could give you some guidance. If you can't make it work, we could have a call and try together.

uturuncoglu commented 8 months ago

@rjdave I was looking at your instructions and I realized that you are not using the system-provided MPI. Am I wrong? It seems that you are building OpenMPI with the Intel compiler (system-provided module). If so, these instructions will also need an additional step to add the system-provided MPI to the spack externals YAML file. Anyway, I am very close to finalizing the script for dependency installation. Once I have tested it on the existing platforms, I'll let you know.

hga007 commented 8 months ago

Dave lost electricity at his house because of yesterday's storm, so he doesn't have internet. Yes, we use the OpenMPI versions on our computers to be consistent with all the other libraries that depend on them. When updating spack-stack, I believe he first tries to update the needed modules; we did that recently with atlas 0.35.0, and I assume he does the same with OpenMPI. Please remember that MPI libraries are usually tuned to the particular architecture, so we shouldn't build them from scratch.

SmithJos13 commented 8 months ago

@uturuncoglu

So does everything look like it was successful, based on the information I provided?

Also, do you have any examples of how these module files were created for other environments that I could work from? If there are any resources you'd recommend I look at for doing this, I'd appreciate it. I think once these module files are built, I can proceed with trying to install the UFS model!

I'm a fairly novice unix user so I've been learning a lot going through this process (hence all the questions).

Thanks!

uturuncoglu commented 8 months ago

@SmithJos13 Here is an example that I am using under a Docker container: https://github.com/uturuncoglu/ufs-weather-model/blob/feature/ams_course/modulefiles/ufs_local.gnu.lua. You could create a new one for your system.

In my case, I am running the regression tests under Docker (it has Slurm and also a spack-stack installation) like the following:

export MACHINE_ID="local"
export USER="root"
cd /opt
git clone -b feature/ams_course --recursive https://github.com/uturuncoglu/ufs-weather-model.git
cd ufs-weather-model/tests
source /etc/profile.d/lmod.sh
./rt.sh -a debug -k -n datm_cdeps_lnd_era5 gnu

Of course, this is for a different configuration not related to the coastal app, and it also uses the feature/ams_course fork of ufs-weather-model, which has extra changes related to the RT system, but the idea is the same.

You might be able to compile one of the coastal-specific configurations like the following (if you name your module file ufs_local.gnu.lua), but I think you still need to adapt the RT system to make it work.

./compile.sh "ufs_local " "-DAPP=CSTLF -DCOORDINATE_TYPE=SPHERICAL -DWET_DRY=ON" coastal gnu NO NO

@pvelissariou1 @saeed-moghimi-noaa I am not sure, but we might raise an issue with the ufs-weather-model developers to get a more flexible RT system that makes it easier to bring in new machines. Anyway, we can discuss this more in our next meeting.

SmithJos13 commented 8 months ago

@uturuncoglu Thanks for the information. I think I've made a little more progress on this. I've been able to clone a clean version of ufs-weather-model into a working directory on my local machine, i.e. git clone --recursive https://github.com/ufs-community/ufs-weather-model.git ufs-weather-model. I then made a module file for my local machine at modulefiles/ufs_local.gnu.lua, which contains all the information in the link you supplied (https://github.com/uturuncoglu/ufs-weather-model/blob/feature/ams_course/modulefiles/ufs_local.gnu.lua).

Then I sourced /etc/profile.d/lmod.sh so that I have the Lua version of environment modules (Lmod) running (I might add this to my .bashrc in the future so I can skip this step). I then specified the folder where the modules are located using module use /opt/ufs-weather-model/modulefiles. Calling module avail then yields:

---------------------------------------------------------- /usr/share/lmod/lmod/modulefiles ----------------------------------------------------------
   Core/lmod/6.6    Core/settarg/6.6

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".

root@8a7cc2fd8a3a:/# module use /opt/
/opt/intel              /opt/modules-5.3.1      /opt/scratch            /opt/spack-stack        /opt/ufs-weather-model  /opt/ufs.local
root@8a7cc2fd8a3a:/# module use /opt/ufs-weather-model/
/opt/ufs-weather-model/.git                /opt/ufs-weather-model/FV3                 /opt/ufs-weather-model/doc
/opt/ufs-weather-model/.github             /opt/ufs-weather-model/GOCART              /opt/ufs-weather-model/driver
/opt/ufs-weather-model/AQM                 /opt/ufs-weather-model/HYCOM-interface     /opt/ufs-weather-model/modulefiles
/opt/ufs-weather-model/CDEPS-interface     /opt/ufs-weather-model/MOM6-interface      /opt/ufs-weather-model/stochastic_physics
/opt/ufs-weather-model/CICE-interface      /opt/ufs-weather-model/NOAHMP-interface    /opt/ufs-weather-model/tests
/opt/ufs-weather-model/CMEPS-interface     /opt/ufs-weather-model/WW3                 
/opt/ufs-weather-model/CMakeModules        /opt/ufs-weather-model/cmake               
root@8a7cc2fd8a3a:/# module use /opt/ufs-weather-model/modulefiles/
root@8a7cc2fd8a3a:/# module avail

--------------------------------------------------------- /opt/ufs-weather-model/modulefiles ---------------------------------------------------------
   ufs_acorn.intel      ufs_expanse.intel    ufs_hercules.gnu      ufs_linux.intel     ufs_noaacloud.intel    ufs_stampede.intel
   ufs_common           ufs_gaea-c5.intel    ufs_hercules.intel    ufs_local.gnu       ufs_odin               ufs_wcoss2.intel
   ufs_derecho.gnu      ufs_hera.gnu         ufs_jet.intel         ufs_macosx.gnu      ufs_orion.intel
   ufs_derecho.intel    ufs_hera.intel       ufs_linux.gnu         ufs_macosx.intel    ufs_s4.intel

---------------------------------------------------------- /usr/share/lmod/lmod/modulefiles ----------------------------------------------------------
   Core/lmod/6.6    Core/settarg/6.6

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".

So the module system is able to recognize the module files supplied by ufs-weather-model. Then trying module load ufs_local.gnu yields:

Lmod has detected the following error:  Unable to load module: python/3.10.8
     /opt/ufs.local/modulefiles/gcc/11.4.0/python/3.10.8.lua : [string "-- -*- lua -*-..."]:20: attempt to call global 'depends_on' (a nil value)

While processing the following module(s):
    Module fullname      Module Filename
    ---------------      ---------------
    python/3.10.8        /opt/ufs.local/modulefiles/gcc/11.4.0/python/3.10.8.lua
    stack-python/3.10.8  /opt/ufs.local/modulefiles/gcc/11.4.0/stack-python/3.10.8.lua
    ufs_local.gnu        /opt/ufs-weather-model/modulefiles/ufs_local.gnu.lua

I suspect the issue is that I need to adapt the module file a little more for my machine, but I'm not really sure what needs to be modified in modulefiles/ufs_local.gnu.lua just from looking at it (besides the paths to the compilers).
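
(One plausible culprit worth checking: the module avail output above lists Core/lmod/6.6, and the depends_on() modulefile function was only introduced in Lmod 7, so a spack-generated .lua file that calls depends_on would fail in exactly this way under Lmod 6.6.)

module --version                      # anything older than Lmod 7 lacks depends_on()
# on Ubuntu, a newer Lmod can be installed via apt, e.g.: apt-get install lmod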

rjdave commented 8 months ago

> @rjdave I was looking at your instructions and I realized that you are not using the system-provided MPI. Am I wrong? It seems that you are building OpenMPI with the Intel compiler (system-provided module). If so, these instructions will also need an additional step to add the system-provided MPI to the spack externals YAML file. Anyway, I am very close to finalizing the script for dependency installation. Once I have tested it on the existing platforms, I'll let you know.

Since I have only been building spack-stack on single-node machines, I have been allowing spack-stack to compile Open MPI. I am currently working on getting spack-stack set up on our university cluster, so I will experiment with the necessary steps to use the system-provided MPI.

uturuncoglu commented 6 months ago

@pvelissariou1 @janahaddad @saeed-moghimi-noaa I am planning to move this issue to the ROMS repo since it is ROMS-related. I'll also open a new ticket for ROMS-CICE coupling in general.