dtcenter / METplus

Python scripting infrastructure for MET tools.
https://metplus.readthedocs.io
Apache License 2.0

Internal: Determine/document a process to test use cases that can't run in automated test environment #2779

Open georgemccabe opened 1 week ago

georgemccabe commented 1 week ago

Currently there are a few use cases that are not run in GitHub Actions for various reasons, such as exceeding the memory or disk space limits. Some of these are noted in the Contributor's Guide. As part of the release process, we should make sure to run these use cases outside of the GitHub Actions automated testing environment to ensure that they run as expected.

The Contributor's Guide describes the process for adding use cases that can't be run in automation: do not assign a number to the use case in the internal/tests/use_cases/all_use_cases.txt file and exclude it from the .github/parm/use_case_groups.json file, which determines the groups of use cases to run in the automated tests. We have since added support for a "disabled" key in use_case_groups.json to always skip those use cases.
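For reference, an always-skipped entry in use_case_groups.json might look something like the sketch below. Only the "disabled" key is described above; the other field names and values are illustrative and may not match the current file exactly.

```json
{
    "category": "model_applications/some_category",
    "index_list": "4",
    "run": false,
    "disabled": true
}
```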

There are a few use cases that are not run in the automated tests, but the reason is unclear because they are not listed in that section of the Contributor's Guide. This could be for various external reasons (unrelated to GitHub Actions limitations), and we may be planning to eventually get these cases working again.

Describe the Task

Time Estimate

The estimate depends on how much of this work we want to complete for this release and how much should move to another issue for future work. At the very least, we should test the use cases that need to be tested!

Sub-Issues

Consider breaking the task down into sub-issues.

Relevant Deadlines

List relevant project deadlines here or state NONE.

Funding Source

Define the source of funding and account keys here or state NONE.

Define the Metadata

Assignee

Labels

Milestone and Projects

Define Related Issue(s)

Consider the impact to the other METplus components.

Task Checklist

See the METplus Workflow for details.

georgemccabe commented 1 week ago

For now, I will start by testing on seneca the use cases that are not run in GHA. Some of the use cases have input data that is provided with the rest of the data for the model_applications categories. Others have data in an additional tar file on the web.
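A rough sketch of staging that extra data before running a case (the URL, tarball name, and target directory below are placeholders, not the actual locations):

```bash
# Stage extra input data for a use case whose inputs are not included in the
# standard model_applications sample data. The URL and tarball name are
# placeholders; use the actual location documented for the use case.
cd "${INPUT_BASE}"   # directory that the METplus INPUT_BASE config variable points to
wget https://example.invalid/path/to/extra_use_case_data.tgz
tar -xzf extra_use_case_data.tgz
```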

I ran each of these using today's MET nightly build directory on seneca. I automate this by setting an environment variable in my .bashrc:

export TODAY_YMD=$(date +%Y%m%d)

then set the following in mccabe.seneca.conf:

MET_INSTALL_DIR=/d1/projects/MET/MET_regression/develop/NB{ENV[TODAY_YMD]}/MET-develop
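With that in place, a single skipped use case can be run manually with run_metplus.py by passing its use case conf along with the seneca user conf. The use case conf path below is a placeholder, not one of the actual skipped cases:

```bash
# Run one use case that is skipped in GHA, using the seneca user conf that
# points MET_INSTALL_DIR at today's nightly build (see above).
# Substitute the conf for the use case actually being tested.
run_metplus.py \
    -c parm/use_cases/model_applications/some_category/SomeUseCase.conf \
    -c ~/mccabe.seneca.conf
```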