dksasaki opened this issue 1 month ago
Great @dksasaki. I will start with item 2, but before starting these tasks, I think we need to update main and dev so we can use gitflow once again. What do you think?
I believe both dev and main are outdated and 42_feature_including-nudging is the most stable and up-to-date version of all. Since you were the last one to code something here, can you confirm that?
@nilodna you are right, 42_feature_including-nudging is our stable version as of now. I'm opening (and accepting) two pull requests - https://github.com/lhico/pyroms_tools/pull/45, https://github.com/lhico/pyroms_tools/pull/44 .
@nilodna, it might be better to modify the directory structure following item 1 before proceeding to other changes. This way we can avoid conflicts in pull requests in the future. Before I proceed, below is a sketch of the directory structure.
I created a new branch: https://github.com/lhico/pyroms_tools/tree/43-prepare-for-publication-release .
Let me know what your ideas are.
pyroms_tools/
├── pyroms
└── tools
    ├── atmos
    │   ├── atmos_forcing.py
    │   └── config.yaml
    ├── boundary
    │   ├── boundary_create.py
    │   ├── boundary_template.py
    │   └── config.yaml
    ├── grid
    │   ├── config.yaml
    │   └── grid_create.py
    ├── initial
    │   ├── config.yaml
    │   ├── initial_create.py
    │   └── initial_template.py
    ├── nudging
    │   ├── config.yaml
    │   └── nudging_create.py
    └── rivers
        ├── config.yaml
        └── rivers_create.py
The branch of pyroms we should use is: https://github.com/lhico/pyroms
I am updating a few things in this package so we can use it with a more recent version of Python (3.9).
If I succeed, I'll create a new repo with our own modified version of pyroms.
UPDATE:
I reconfigured our pyroms version. The bathymetry smoothing function does not depend on any Fortran script in pyroms. I am using Python 3.9, which was installed as follows (I hope I haven't missed anything):
# package installation
micromamba install -c conda-forge python=3.9
micromamba install -c conda-forge dask netCDF4 ipython
micromamba install -c conda-forge esmpy xarray numpy shapely cf_xarray sparse numba
micromamba install -c conda-forge xesmf
# pyroms installation
git clone https://github.com/lhico/pyroms
micromamba install -c conda-forge scikit-build-core cmake ninja
pip install build
micromamba install -c conda-forge lpsolve55
cd pyroms/pyroms
python setup.py install
cd ../pyroms_toolbox
python setup.py install
cd ../bathy_smoother
python setup.py install
Awesome @dksasaki. Do you want to work on a specific item? I was thinking of working on item 1, but since I coded the meteorological extrapolation functions, I can work on that item too.
Let me know your preference.
@nilodna, I've already started working on items 1 and 7. I believe it would be great if you started on the meteorological files and perhaps the unit tests - I mean, create some idealized files that represent a bathymetry, GLORYS, and atmospheric forcing, based on the ERA5, GLORYS, and GEBCO files? We can discuss this part further.
BTW, the scripts for creating a grid, smoothing the topography, and building initial and boundary conditions have already been updated. There are more details I'll need to work on, but it's a start.
Ok, I'll start on the "unit tests", but I don't think we can call them unit tests, because they are not supposed to run automatically during the installation of the package. Instead, they are more like test cases, so the user can run these experiments and compare them with what is expected.
I have a few ideas. Give me a few days and we can discuss.
One idea is to actually run unit tests while the package is installed - we can create some xarray datasets on the fly and run the scripts on them to check whether there is any issue.
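For instance, a minimal sketch of the on-the-fly idea (the variable names h, eta_rho, and xi_rho follow ROMS conventions but are assumptions here, as is the seamount shape):

import numpy as np
import xarray as xr

def make_synthetic_bathymetry(nx=50, ny=40):
    # build a small idealized bathymetry (a Gaussian seamount) in memory
    x = np.linspace(0, 100e3, nx)
    y = np.linspace(0, 80e3, ny)
    xx, yy = np.meshgrid(x, y)
    depth = 1000.0 - 800.0 * np.exp(-((xx - 50e3) ** 2 + (yy - 40e3) ** 2) / (15e3) ** 2)
    return xr.Dataset(
        {"h": (("eta_rho", "xi_rho"), depth)},
        coords={"x_rho": (("eta_rho", "xi_rho"), xx), "y_rho": (("eta_rho", "xi_rho"), yy)},
    )

def test_synthetic_bathymetry_is_valid():
    ds = make_synthetic_bathymetry()
    # sanity checks any generated dataset should satisfy
    assert (ds.h > 0).all()
    assert ds.h.shape == (40, 50)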
95d53f769f1ac37c09723377ef8e9372415d566b I've included a pyproject.toml and an install.sh. install.sh is a draft containing instructions for installing the dependencies of pyroms_tools. pyproject.toml configures the installation of the package via python -m pip install . (or pip install .). Executables should be made available through the terminal, since the following is stated in pyproject.toml:
[project.scripts]
make_grid = "pyroms_tools.grid.make_grid_:main"
make_grid_smooth = "pyroms_tools.grid.make_grid_smooth:main"
make_grid_prototype = "pyroms_tools.grid.make_grid_prototype:main"
make_ic_template = "pyroms_tools.initial.ic_file_template:main"
make_ic_build = "pyroms_tools.initial.ic_file_build:main"
make_boundary_template = "pyroms_tools.boundary.boundary_file_template:main"
make_boundary_build = "pyroms_tools.boundary.boundary_file_build:main"
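For context, a minimal pyproject.toml around that table could look roughly like the sketch below; the build backend and the metadata values are assumptions for illustration, not the actual file in the repository.

[build-system]
requires = ["setuptools>=64"]
build-backend = "setuptools.build_meta"

[project]
name = "pyroms_tools"
version = "0.1.0"
description = "Tools to prepare ROMS grid, initial, boundary, and forcing files"
requires-python = ">=3.9"

[project.scripts]
make_grid = "pyroms_tools.grid.make_grid_:main"
# ... remaining entry points exactly as listed above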
b43e91f9075f5a6d3f32b4130e7088ed510ecebd To use one of the commands above, type, for instance (an example config.yaml was also included in the root of the package):
make_grid --config config.yaml
make_grid_smooth --config config.yaml
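Each of these commands resolves to a main() function in the corresponding module. Purely as a sketch of how such an entry point could parse the --config option (the argument handling below is an assumption based on the commands above, not the shipped code):

import argparse
import yaml

def main():
    # parse the --config option used by the make_* commands
    parser = argparse.ArgumentParser(description="Build a ROMS file from a YAML configuration.")
    parser.add_argument("--config", required=True, help="path to the config.yaml file")
    args = parser.parse_args()

    with open(args.config) as f:
        config = yaml.safe_load(f)

    # ... hand the parsed configuration to the grid/initial/boundary routines
    print(f"Loaded configuration with {len(config)} top-level keys")

if __name__ == "__main__":
    main()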
b7581af31e24d737e8fb1078f02936e56cb8a49a I also included a minimal working installation of a Fortran wrapper. This package will be moved to a different section of the code, but it is placed here for simplicity for now.
95d53f769f1ac37c09723377ef8e9372415d566b An example tool to edit mapped grid values has been added at pyroms_tools/rivers/edit_mask.py. In the future we can use this tool to edit masks.
I have been thinking about your comment on creating datasets on the fly, and I believe we can do something similar to what is done in xarray. They ship files containing the expected output, and their test suite creates datasets and compares them with those files for assertion.
We can do something like that: we can create a domain, initial conditions, and boundary conditions beforehand (small files) and ship them with the package in the tests folder (tests/expected). During the installation we can test the creation of these datasets by comparing them with the files containing what is expected.
What do you think of that?
That is exactly what I was thinking. Perhaps a seamount case, or an upwelling one?
Perfect, we are on the same page. However, I have one concern about this idea that I was trying to figure out.
We want to test the regridding functions for surface/lateral boundary conditions and initial conditions, as well as create a grid with smoothed bathymetry, right? Our default source information is GLORYS for oceanic variables and ERA5 for meteorological variables. My concern is how to test these functions without downloading the source information.
I was thinking of providing small source files with just a few timesteps (GLORYS, with daily frequency, only 1 day; and ERA5, with hourly frequency, a few hours) so that the regridding can be tested with these files. The idealized domain could be something to run an upwelling test case.
I believe this way we can design an idealized test case with "realistic" conditions. We can base this test on the idealized experiments from my thesis that I am finishing setting up.
How does that sound to you?
I don't like this idea very much, because it will make the package heavier. In any case, given our time constraints, perhaps a good option is to do this now and, in the future, implement functions that create synthetic data.
Perhaps we can put the datasets somewhere else (Drive/Dropbox) and download them from there. That avoids carrying heavy datasets in our package.
Let me know what you think.
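If we go that route, a small fetch helper could handle the download and local caching. A rough sketch using the pooch library (the URL is a placeholder, not a real hosting location, and the checksum still needs to be filled in):

import pooch

def fetch_test_data():
    # download (and cache) the small GLORYS/ERA5 sample files from external storage
    return pooch.retrieve(
        url="https://example.org/pyroms_tools_testdata/glorys_sample.nc",  # placeholder URL
        known_hash=None,  # set to the file's sha256 once the data is actually uploaded
    )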
b7581af31e24d737e8fb1078f02936e56cb8a49a 48a25c3dbce89e62e6e840769a55960d917afe1a I added two installable wrappers for Fortran code to the source directory. They are in vertical_interpolation and extrapolate. To install them, type pip install . within each package. (This step should be taken after installing everything else; the instructions are in install.sh.)
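In other words, something along these lines (assuming the two directories sit at the top of the source tree):

# install the two Fortran wrapper packages (run after the steps in install.sh)
cd vertical_interpolation && pip install . && cd ..
cd extrapolate && pip install . && cd ..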
1b779a9d0d0aed786c4f04b1927a8fdbbb4721e7 The forcing scripts are up-to-date along with the pertinent configuration files. They should be fully functional now.
ae42868ac03d69247c02c8fea3127d24c2150377, 38e0e9342344f2a1a565f1fa406c3d17b8e20ab7 Nudging scripts were included.
95d53f769f1ac37c09723377ef8e9372415d566b An example of a lightweight mask editor using matplotlib, cartopy, and xarray was included. You can find it in pyroms_tools/rivers/edit_mask.py. Other minor modifications were also made.
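The actual edit_mask.py lives in the repository; purely to illustrate the kind of workflow meant here, a bare-bones click-to-toggle editor could look like the sketch below (cartopy is left out for brevity, and the file name roms_grid.nc and variable names such as mask_rho are placeholders, not the shipped tool):

import matplotlib.pyplot as plt
import numpy as np
import xarray as xr

ds = xr.open_dataset("roms_grid.nc")            # placeholder grid file
mask = ds["mask_rho"].values.copy()             # ROMS convention: 0 = land, 1 = water
lon, lat = ds["lon_rho"].values, ds["lat_rho"].values

fig, ax = plt.subplots()
mesh = ax.pcolormesh(lon, lat, mask, vmin=0, vmax=1, cmap="Blues", shading="auto")

def on_click(event):
    # toggle the mask value of the cell closest to the clicked point
    if event.inaxes is not ax or event.xdata is None:
        return
    dist = (lon - event.xdata) ** 2 + (lat - event.ydata) ** 2
    j, i = np.unravel_index(np.argmin(dist), dist.shape)
    mask[j, i] = 1 - mask[j, i]
    mesh.set_array(mask)
    fig.canvas.draw_idle()

fig.canvas.mpl_connect("button_press_event", on_click)
plt.show()

# after editing, write the modified mask back out
ds_out = ds.assign(mask_rho=(ds["mask_rho"].dims, mask))
ds_out.to_netcdf("roms_grid_edited.nc")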
Next steps are to convert the tides to the new standards and finalize the toolkit for dealing with rivers.
btw, sorry for not applying a git workflow! I plan to clean up https://github.com/lhico/pyroms_tools/tree/43-prepare-for-publication-release and then start with a proper workflow.
Hey @dksasaki, after some time away from this project, I started working on the unit tests (branch 51, based on branch 43).
They are under the tests folder, with the following organization:
pyroms_tools
├── pyroms/
└── tools/
    ├── ...
    └── tests
        ├── expected/
        │   ├── bathy.nc
        │   └── expected_grid.nc
        ├── grid_config_test.yml
        ├── test_grid.py
        └── ...
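As a rough illustration of how test_grid.py could compare generated output against the files in tests/expected (make_grid_from_config is a hypothetical wrapper; the real test would call whatever the grid module actually exposes):

from xarray import open_dataset
from xarray.testing import assert_allclose

from pyroms_tools.grid import make_grid_from_config  # hypothetical helper, for illustration only

def test_grid_matches_expected(tmp_path):
    # build a small grid from the test configuration into a temporary directory
    generated_path = make_grid_from_config("tests/grid_config_test.yml", out_dir=tmp_path)
    generated = open_dataset(generated_path)
    expected = open_dataset("tests/expected/expected_grid.nc")
    # allow small floating-point differences across platforms
    assert_allclose(generated, expected)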
@dksasaki, I'm checking for duplicated functions across the modules. For instance, the function get_dicts() (in the initial and boundary modules) has the same purpose as the function load_config_from_yaml() in the grid module, but they not only have different names, they also take different arguments and process the yaml file "differently".
I'll only get to this after finishing the unit tests and the meteorological adjustments, but I created a new item on the original checklist to remind us of these adjustments when cleaning up the package.
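Once they are consolidated, a single shared loader could replace both. A minimal sketch of what that helper might look like (the function name, signature, and location are suggestions, not existing code):

from typing import Optional
import yaml

def load_config(path: str, section: Optional[str] = None) -> dict:
    # load a pyroms_tools YAML configuration file; if `section` is given,
    # return only that top-level block (e.g. "grid", "initial", "boundary")
    with open(path) as f:
        config = yaml.safe_load(f)
    return config[section] if section is not None else config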
Description: We are starting a to-do list with the objective of releasing this tool in a publication. The following tasks need to be completed before preparing the publication:
To-Do List:
Configuration Files:
Bathymetry Smoothing:
Meteorological Remapping:
Implement pyproject.toml for this package
Create Unit Testing:
Include Docker Installation for Legacy:
Include Docker Installation with ROMS:
Implement Argparse:
Clean up the package (pylint might help)
Note: This issue will be updated in the future
Update: