NOAA-OWP / ngen

Next Generation Water Modeling Engine and Framework Prototype

Running NextGen at CONUS Scale - A Step by Step Instruction #794

Closed stcui007 closed 5 months ago

stcui007 commented 5 months ago

This PR provides detailed documentation on how to run ngen at CONUS scale.

Additions

NextGen_ON_CONUS.md

Removals

-

Changes

-

Testing

All Linux-related commands in the document have been tested. The mpirun jobs listed have been tested, and visualization has been checked in a browser.

Screenshots

Notes

-

Todos

- Add Topmodel
- Add Routing

Checklist

Testing checklist (automated report can be put here)

1. Target Environment support

stcui007 commented 5 months ago

Thank you Justin for the careful review. I'll implement the suggestions tomorrow.

On Thu, Apr 18, 2024 at 9:30 AM Justin Singh-M. - NOAA < @.***> wrote:

@.**** requested changes on this pull request.

In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1570847477:

+# Download the Codes
+
+To download the ngen source code, run the following commands:
+
+git clone https://github.com/NOAA-OWP/ngen.git https://github.com/NOAA-OWP/ngen.git
+cd ngen
+
+Then we need all the submodule codes. So run the command below:
+
+git submodule update --init --recursive
+
+# Setting up the Environment
+
+For setting up the build and computation environment, we refer the users to our documentation chapter DEPENDENCIES.md for details. Basically, you will need to have access to C/C++ compiler, MPI, Boost, NetCDF, Cmake, SQLite3. Some of them may already be on your system. Otherwise, you have to install your own version. There are also some required software packages that come with ngen as submodules, such as Udunits libraries, pybind11, and iso_c_fortran_bmi.
+
+You are most likely need to use Python. For that we recommend setting up a virtual environment. For details, see PYTHON_ROUTING.md. After setting up the Python virtual environment and activating it, you may need install additional python modules depending what ngen submodules you want to run.

⬇️ Suggested change

-You are most likely need to use Python. For that we recommend setting up a virtual environment. For details, see PYTHON_ROUTING.md. After setting up the Python virtual environment and activating it, you may need install additional python modules depending what ngen submodules you want to run.
+You most likely need to use Python. For that we recommend setting up a virtual environment. For details, see PYTHON_ROUTING.md. After setting up the Python virtual environment and activating it, you may need install additional python modules depending on what ngen submodules you want to run.
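For readers following along with this section, a minimal sketch of the virtual environment setup described above (the directory name and module list are placeholders, not prescribed by the document):

```
# Create and activate a Python virtual environment (names and paths are examples only)
python3 -m venv venv-ngen
source venv-ngen/bin/activate

# Install whatever modules your chosen submodules require; numpy is commonly
# needed for the Python/BMI pieces, others depend on your configuration.
pip install -U pip numpy
```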


In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1570848988:

+git clone https://github.com/NOAA-OWP/ngen.git https://github.com/NOAA-OWP/ngen.git
+cd ngen
+
+Then we need all the submodule codes. So run the command below:
+
+git submodule update --init --recursive
+
+# Setting up the Environment
+
+For setting up the build and computation environment, we refer the users to our documentation chapter DEPENDENCIES.md for details. Basically, you will need to have access to C/C++ compiler, MPI, Boost, NetCDF, Cmake, SQLite3. Some of them may already be on your system. Otherwise, you have to install your own version. There are also some required software packages that come with ngen as submodules, such as Udunits libraries, pybind11, and iso_c_fortran_bmi.
+
+You are most likely need to use Python. For that we recommend setting up a virtual environment. For details, see PYTHON_ROUTING.md. After setting up the Python virtual environment and activating it, you may need install additional python modules depending what ngen submodules you want to run.
+
+# Build the Executable
+
+After setting up the environment variables, we need first build the necessay dynamically linked librares. Although ngen has capability for automated building of submodule libraries, we build them explicitly so that users have a better understanding. For simplicity, we display the content a script which we name it build_libs.

⬇️ Suggested change

-After setting up the environment variables, we need first build the necessay dynamically linked librares. Although ngen has capability for automated building of submodule libraries, we build them explicitly so that users have a better understanding. For simplicity, we display the content a script which we name it build_libs.
+After setting up the environment variables, we need to first build the necessary dynamically linked libraries. Although ngen has the capability for automated building of submodule libraries, we build them explicitly so that users have a better understanding. For simplicity, we display the content a script which we name it build_libs.
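For reference, a sketch of the kind of command pair a `build_libs`-style script contains, based on the one CMake invocation visible in the quoted hunk; the configure step (the `-S`/`-B` paths) is an assumption, and the script shown in the document itself is authoritative:

```
# Configure and build one submodule library explicitly (example pattern only);
# the real build_libs script repeats this for each required submodule.
cmake -S extern/SoilMoistureProfiles/SoilMoistureProfiles \
      -B extern/SoilMoistureProfiles/SoilMoistureProfiles/cmake_build &&
cmake --build extern/SoilMoistureProfiles/SoilMoistureProfiles/cmake_build --target smpbmi -- -j 2
```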


In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1570857954:

+cmake --build extern/SoilMoistureProfiles/SoilMoistureProfiles/cmake_build --target smpbmi -- -j 2 &&
+
+
+Copy the content into the file named `build_libs` and run the command:
+
+
+source build_libs
+
+
+This will build all libraries we need to run `ngen` at the time of this writing.
+
+Then, with the Python virtual environment activated, we can build the MPI executable using the following script:
+
+
+cmake -S . -B cmake_build_mpi -DCMAKE_C_COMPILER=/local/lib/bin/mpicc -DCMAKE_CXX_COMPILER=/local/lib/bin/mpicxx \
+-DBOOST_ROOT=/home/shengting.cui/usr/boost_1_79_0/ \

⬇️ Suggested change

--DBOOST_ROOT=/home/shengting.cui/usr/boost_1_79_0/ \
+-DBOOST_ROOT= \
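To illustrate the placeholder form this suggestion asks for, the configure step from the quoted hunk can be written with generic paths (all paths are placeholders for your own installation; only the options visible in the quoted hunks are shown here):

```
cmake -S . -B cmake_build_mpi \
  -DCMAKE_C_COMPILER=/path/to/mpicc \
  -DCMAKE_CXX_COMPILER=/path/to/mpicxx \
  -DBOOST_ROOT=/path/to/boost \
  -DNGEN_WITH_EXTERN_PET:BOOL=OFF \
  -DNGEN_WITH_EXTERN_NOAH_OWP_MODULAR:BOOL=ON
cmake --build cmake_build_mpi --target all -j 8
```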


In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1570863244:

-DNGEN_WITH_EXTERN_PET:BOOL=OFF \
-DNGEN_WITH_EXTERN_NOAH_OWP_MODULAR:BOOL=ON
+cmake --build cmake_build_mpi --target all -j 8
+```
+For the meaning of each option in the script, see ngen/wiki build page.
+
+Suppose the above script is named build_mpi, execute the following command to build:
+
+source build_mpi
+
+This will build an executable in the cmake_build_mpi directory named ngen and another named partitionGenerator as well as all the unit tests in the cmake_build_mpi/test.
+
+# CONUS Hydrofabric
+
+The CONUS hydrofabric is downloaded from here. The file name under the list is conus.gpkg. It is cautioned that since the data there are evolving and newer version may be available in the future. When using a newer version, be mindful that the corresponding initial configuration file generation and validation for all submodules at CONUS scale are necessary, which may be a non-trivial process due to the shear size of the spatial scale.

⬇️ Suggested change

-The CONUS hydrofabric is downloaded from here. The file name under the list is conus.gpkg. It is cautioned that since the data there are evolving and newer version may be available in the future. When using a newer version, be mindful that the corresponding initial configuration file generation and validation for all submodules at CONUS scale are necessary, which may be a non-trivial process due to the shear size of the spatial scale.
+The CONUS hydrofabric is downloaded from here. The file name under the list is conus.gpkg. Note that since the data there is continually evolving, a newer version may be available in the future. When using a newer version, be mindful that the corresponding initial configuration file generation and validation for all submodules at CONUS scale is necessary, which may be a non-trivial process due to the sheer size of the spatial scale.
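If the hosting site exposes a direct download link (the URL below is a placeholder; the file may instead need to be fetched through the site's web interface), retrieval can look like:

```
# Placeholder URL -- obtain the real v2.01 conus.gpkg link from lynker-spatial.com
wget -O conus.gpkg https://example.com/path/to/conus.gpkg
ls -lh conus.gpkg   # the CONUS geopackage is fairly large
```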


In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1570867705:

+cmake --build cmake_build_mpi --target all -j 8
+```
+
+For the meaning of each option in the script, see ngen/wiki [build](https://github.com/NOAA-OWP/ngen/wiki/Building) page.
+
+Suppose the above script is named `build_mpi`, execute the following command to build:
+
+source build_mpi
+
+This will build an executable in the `cmake_build_mpi` directory named `ngen` and another named `partitionGenerator` as well as all the unit tests in the `cmake_build_mpi/test`.
+
+# CONUS Hydrofabric
+
+The CONUS hydrofabric is downloaded from [here](https://www.lynker-spatial.com/#v20.1/). The file name under the list is `conus.gpkg`. It is cautioned that since the data there are evolving and newer version may be available in the future. When using a newer version, be mindful that the corresponding initial configuration file generation and validation for all submodules at CONUS scale are necessary, which may be a non-trivial process due to the shear size of the spatial scale.
+
+As the file is fairly large, it is worth some consideration to store it in a proper place, then simply build a symbolic link in the `ngen` home directory, thus named `./hydrofabric/conus.gpkg`. Note the easiest way to create the symbolic link is to `makedir hydrofabric` and then create the full path.

⬇️ Suggested change

-As the file is fairly large, it is worth some consideration to store it in a proper place, then simply build a symbolic link in the ngen home directory, thus named ./hydrofabric/conus.gpkg. Note the easiest way to create the symbolic link is to makedir hydrofabric and then create the full path.
+As the file is fairly large, it is worth some consideration to store it in a proper place, then simply build a symbolic link in the ngen home directory, thus named ./hydrofabric/conus.gpkg. Note the easiest way to create the symbolic link is to create a hydrofabric directory and then create a link to that directory.
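A concrete sketch of the directory-plus-symlink arrangement the suggestion describes (the storage path is a placeholder):

```
# Keep the large geopackage on suitable storage and link to it from the ngen tree
mkdir -p hydrofabric
ln -s /path/to/storage/conus.gpkg ./hydrofabric/conus.gpkg
```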


In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1570870673:

+
+As the file is fairly large, it is worth some consideration to store it in a proper place, then simply build a symbolic link in the ngen home directory, thus named ./hydrofabric/conus.gpkg. Note the easiest way to create the symbolic link is to makedir hydrofabric and then create the full path.
+
+# Generate Partition For Parallel Computation
+
+For parallel computation using MPI on hydrofabric, a partition generate tool is used to partition the hydrofabric features ids into a number of partitions equal to the number of MPI processing CPU cores. To generate the partition file, run the following command:
+
+
+./cmake-build_mpi/partitionGenerator ./hydrofabric/conus.gpkg ./hydrofabric/conus.gpkg ./partition_config_32.json 32 '' ''
+
+
+In the command above, conus.gpkg is the NextGen hydrofabric version 2.01 for CONUS, partition_config_32.json is the partition file that contains all features ids and their interconnected network information. The number 32 is intended number of processing cores for running parallel build ngen using MPI. The last two empty strings, as indicated by '', indicate there is no subsetting, i.e., we intend to run the whole CONUS hydrofabric.
+
+# Prepare the Input Data
+
+Input data include the forcing data and initial parameter data for various submodules. These depend on what best suit the user need. For our case, as of this documentation, beside forcing data, which can be accessed at ./forcing/NextGen_forcing_2016010100.nc using the symbolic link scheme, we also generated initial input data for various submodules noah-owp-modular, PET, CFE, SoilMoistureProfiles (SMP), SoilFreezeThaw (SFT). The first three are located in ./conus_config/, the SMP initial configus are located in ./conus_smp_configs/ and the SFT initial configs are located in ./conus_sft_configs/.

⬇️ Suggested change

-Input data include the forcing data and initial parameter data for various submodules. These depend on what best suit the user need. For our case, as of this documentation, beside forcing data, which can be accessed at ./forcing/NextGen_forcing_2016010100.nc using the symbolic link scheme, we also generated initial input data for various submodules noah-owp-modular, PET, CFE, SoilMoistureProfiles (SMP), SoilFreezeThaw (SFT). The first three are located in ./conus_config/, the SMP initial configus are located in ./conus_smp_configs/ and the SFT initial configs are located in ./conus_sft_configs/.
+Input data includes the forcing data and initial parameter data for various submodules. These depend on what best suits the user's need. For our case, as of this documentation, beside forcing data, which can be accessed at ./forcing/NextGen_forcing_2016010100.nc using the symbolic link scheme, we also generated initial input data for various submodules noah-owp-modular, PET, CFE, SoilMoistureProfiles (SMP), SoilFreezeThaw (SFT). The first three are located in ./conus_config/, the SMP initial configs are located in ./conus_smp_configs/ and the SFT initial configs are located in ./conus_sft_configs/.
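The same symbolic link scheme can be applied to the forcing file referenced above; the storage path is again a placeholder:

```
mkdir -p forcing
ln -s /path/to/storage/NextGen_forcing_2016010100.nc ./forcing/NextGen_forcing_2016010100.nc
```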


In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1570874067:

+./cmake-build_mpi/partitionGenerator ./hydrofabric/conus.gpkg ./hydrofabric/conus.gpkg ./partition_config_32.json 32 '' ''
+```
+
+In the command above, `conus.gpkg` is the NextGen hydrofabric version 2.01 for CONUS, `partition_config_32.json` is the partition file that contains all features ids and their interconnected network information. The number `32` is intended number of processing cores for running parallel build `ngen` using MPI. The last two empty strings, as indicated by `''`, indicate there is no subsetting, i.e., we intend to run the whole CONUS hydrofabric.
+
+# Prepare the Input Data
+
+Input data include the forcing data and initial parameter data for various submodules. These depend on what best suit the user need. For our case, as of this documentation, beside forcing data, which can be accessed at `./forcing/NextGen_forcing_2016010100.nc` using the symbolic link scheme, we also generated initial input data for various submodules `noah-owp-modular`, `PET`, `CFE`, `SoilMoistureProfiles (SMP)`, `SoilFreezeThaw (SFT)`. The first three are located in `./conus_config/`, the SMP initial configus are located in `./conus_smp_configs/` and the SFT initial configs are located in `./conus_sft_configs/`.
+
+For code used to generate the initial config files for the various modules, the interested users are directed to this [web location](https://github.com/NOAA-OWP/ngen-cal/tree/master/python/ngen_config_gen).
+
+The users are warned that since the simulated region is large, some of the initial config parameters values for some catchments may be unsuitable and cause the `ngen` execution to stop due to errors. Usually, in such cases, either `ngen` or the submodule itself may provide some hint as to the catchment ids or the location of the code that caused the error. Users may follow these hints to figure out as to which initial input parameter or parameters are initialized with inappropriate values. In the case of SFT, an initial value of `smcmax=1.0` would be too large. In the case of SMP, an initial value of `b=0.01` would be too small, for example.
+
+# Build the Realization Configurations
+
+The realization configuration file in `Json` format contains high level information to run a `ngen` simulation, such as interconnected submodules, paths to forcing file, shared libraries, initialization parameters, duration of simulation, I/O variables, etc. We have built the realization configurations for several commonly used submodules which are located in `data/baseline/`. These are built by adding one submodule at a time, test run for 10 days simulation time. The successive submodules used are:

⬇️ Suggested change

-The realization configuration file in Json format contains high level information to run a ngen simulation, such as interconnected submodules, paths to forcing file, shared libraries, initialization parameters, duration of simulation, I/O variables, etc. We have built the realization configurations for several commonly used submodules which are located in data/baseline/. These are built by adding one submodule at a time, test run for 10 days simulation time. The successive submodules used are:
+The realization configuration file, in JSON format, contains high level information to run a ngen simulation, such as interconnected submodules, paths to forcing file, shared libraries, initialization parameters, duration of simulation, I/O variables, etc. We have built the realization configurations for several commonly used submodules which are located in data/baseline/. These are built by adding one submodule at a time, performing a test run for a 10 day simulation. The successive submodules used are:


In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1570876533:

+# Build the Realization Configurations
+
+The realization configuration file in Json format contains high level information to run a ngen simulation, such as interconnected submodules, paths to forcing file, shared libraries, initialization parameters, duration of simulation, I/O variables, etc. We have built the realization configurations for several commonly used submodules which are located in data/baseline/. These are built by adding one submodule at a time, test run for 10 days simulation time. The successive submodules used are:
+
+sloth (conus_bmi_multi_realization_config_w_sloth.json)
+sloth+noah-owp-modular (conus_bmi_multi_realization_config_w_sloth_noah.json)
+sloth+noah-owp-modular+pet (conus_bmi_multi_realization_config_w_sloth_noah_pet.json)
+sloth+noah-owp-modular+pet+cfe (conus_bmi_multi_realization_config_w_sloth_noah_pet_cfe.json)
+sloth+noah-owp-modular+pet+smp (conus_bmi_multi_realization_config_w_sloth_noah_pet_smp.json)
+sloth+noah-owp-modular+pet+smp+sft (conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft.json)
+sloth+noah-owp-modular+pet+smp+sft+cfe (conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_cfe.json)
+
+
+# Run Computations with Submodules
+
+With all preparation steps completed, we are now ready to run computations. We use MPI as our parallel processing application with 32 cores as an example. Users are free to choose whatever number cores they want, just make sure you will need to have the appropriate corresponding partition json file for the number of cores used. The command line for running a MPI job is as sollows:

⬇️ Suggested change

-With all preparation steps completed, we are now ready to run computations. We use MPI as our parallel processing application with 32 cores as an example. Users are free to choose whatever number cores they want, just make sure you will need to have the appropriate corresponding partition json file for the number of cores used. The command line for running a MPI job is as sollows:
+With all preparation steps completed, we are now ready to run computations. We use MPI as our parallel processing application with 32 cores as an example. Users are free to choose whatever number cores they want, just make sure you will need to have the appropriate corresponding partition JSON file for the number of cores used. The command line for running a MPI job is as follows:


In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1570877056:

+sloth+noah-owp-modular (conus_bmi_multi_realization_config_w_sloth_noah.json)
+sloth+noah-owp-modular+pet (conus_bmi_multi_realization_config_w_sloth_noah_pet.json)
+sloth+noah-owp-modular+pet+cfe (conus_bmi_multi_realization_config_w_sloth_noah_pet_cfe.json)
+sloth+noah-owp-modular+pet+smp (conus_bmi_multi_realization_config_w_sloth_noah_pet_smp.json)
+sloth+noah-owp-modular+pet+smp+sft (conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft.json)
+sloth+noah-owp-modular+pet+smp+sft+cfe (conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_cfe.json)
+
+
+# Run Computations with Submodules
+
+With all preparation steps completed, we are now ready to run computations. We use MPI as our parallel processing application with 32 cores as an example. Users are free to choose whatever number cores they want, just make sure you will need to have the appropriate corresponding partition json file for the number of cores used. The command line for running a MPI job is as sollows:
+
+For a simple example run and quick turn around, you can run:
+
+
+run -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth.json conus_partition_32.json

⬇️ Suggested change

-run -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth.json conus_partition_32.json
+mpiexec -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth.json conus_partition_32.json
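For readers new to the ngen command line, an annotated version of the suggested command may help; the argument roles are inferred from the surrounding text and should be checked against the usage output printed by ngen and the wiki build page:

```
# Arguments (inferred; verify against ngen's own usage output):
#   1-2: catchment data file and subset list ('' = run all catchments)
#   3-4: nexus data file and subset list ('' = run all nexuses)
#   5:   realization configuration (JSON)
#   6:   partition file matching the number of MPI ranks (-n 32)
mpiexec -n 32 ./cmake_build_mpi/ngen \
  ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' \
  data/baseline/conus_bmi_multi_realization_config_w_sloth.json \
  conus_partition_32.json
```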


In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1570877417:

+
+
+# Run Computations with Submodules
+
+With all preparation steps completed, we are now ready to run computations. We use MPI as our parallel processing application with 32 cores as an example. Users are free to choose whatever number cores they want, just make sure you will need to have the appropriate corresponding partition json file for the number of cores used. The command line for running a MPI job is as sollows:
+
+For a simple example run and quick turn around, you can run:
+
+
+run -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth.json conus_partition_32.json
+
+
+For a more substantial example simulation, you can run:
+
+
+run -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth_noah.json conus_partition_32.json

⬇️ Suggested change

-run -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth_noah.json conus_partition_32.json
+mpiexec -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth_noah.json conus_partition_32.json


In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1570878772:

+
+For a simple example run and quick turn around, you can run:
+
+
+run -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth.json conus_partition_32.json
+
+
+For a more substantial example simulation, you can run:
+
+
+run -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth_noah.json conus_partition_32.json
+
+
+For an example taken into account more realistic contributions, you can try:
+```
+run -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_cfe.json conus_partition_32.json

⬇️ Suggested change

-run -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_cfe.json conus_partition_32.json
+mpiexec -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_cfe.json conus_partition_32.json


In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1570879246:

+
+With all preparation steps completed, we are now ready to run computations. We use MPI as our parallel processing application with 32 cores as an example. Users are free to choose whatever number cores they want, just make sure you will need to have the appropriate corresponding partition json file for the number of cores used. The command line for running a MPI job is as sollows:
+
+For a simple example run and quick turn around, you can run:
+
+
+run -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth.json conus_partition_32.json
+
+
+For a more substantial example simulation, you can run:
+
+
+run -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth_noah.json conus_partition_32.json
+
+
+For an example taken into account more realistic contributions, you can try:

⬇️ Suggested change

-For an example taken into account more realistic contributions, you can try:
+For an example taking into account more realistic contributions, you can try:


In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1570881580:

+
+
+For a more substantial example simulation, you can run:
+
+
+run -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth_noah.json conus_partition_32.json
+
+
+For an example taken into account more realistic contributions, you can try:
+
+run -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_cfe.json conus_partition_32.json
+```
+
+where `ngen` is the executable we build in the [Building the Executable](#Building the Executable) section. All other terms have been discussed above in details. With the current existing realization config files, the above jobs run 10 days simulation time on CONUS scale.
+
+Be aware that the above commands will generate over a million output files associated with catchment and nexus ids so if you issue a `ls` command in the `ngen` directory, it will be significantly slower than usual to list all the file names. The exact time will depend on the computer you are working on.

You may want to note/include the use of the output_root realization config option here, so that outputs can be stored in a separate directory from the source tree.
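As a hedged illustration of that suggestion (the exact placement of the key should be confirmed against the realization configuration documentation), outputs can be redirected by creating a directory outside the source tree and pointing the config at it:

```
# Create a directory outside the source tree for the very large number of output files
mkdir -p /path/to/conus_outputs

# Then, in the realization config JSON, point output_root at it, e.g.:
#   "output_root": "/path/to/conus_outputs/"
```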

In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1570882469:

@@ -0,0 +1,173 @@
+# NextGen on CONUS
+
+This documentation provides instructions on all neccessary steps and components to run NextGen jobs at CONUS scale. Considering the computations large scale, we focus only on running parallel jobs using MPI.

⬇️ Suggested change

-This documentation provides instructions on all neccessary steps and components to run NextGen jobs at CONUS scale. Considering the computations large scale, we focus only on running parallel jobs using MPI.
+This documentation provides instructions on all neccessary steps and components to run NextGen jobs at CONUS scale. Considering the computation's large scale, we focus only on running parallel jobs using MPI.


stcui007 commented 5 months ago

Good catch for both. Thanks.

On Tue, Apr 23, 2024 at 10:46 AM Justin Singh-M. - NOAA < @.***> wrote:

@.**** commented on this pull request.

In doc/NextGen_ON_CONUS.md https://github.com/NOAA-OWP/ngen/pull/794#discussion_r1576489783:

@@ -0,0 +1,174 @@
+# NextGen on CONUS
+
+This documentation provides instructions on all neccessary steps and components to run NextGen jobs at CONUS scale. Considering the computation's large scale, we focus only on running parallel jobs using MPI.
+
+ Summary
+ Download the Codes
+ Setting Up the Environment
+ Build the Executable
+* CONUS Hydrofabric

This doesn't link in the rich diff, does changing the URL slightly to `conus-hydrofabric` resolve that?

⬇️ Suggested change

- CONUS Hydrofabric
+ CONUS Hydrofabric
