eldang opened 3 years ago
What's the thinking here -- manually verify output once and have a goldenfile to check against? Assert basic bounds for output? How could we test something that really came up, like the SRTM tile boundary?
I'm thinking of this more as regression testing than trying to catch everything. So a basic test suite might be something like running `pytest` over a few fixed cases to see that those behave as expected. But this is simple enough code that I'm not sure we even need to go into that level of detail. It would be very quick & easy for me to set up the test cases themselves. The part I lack experience with is the infrastructure to get those to run automatically on a PR. https://github.com/CoralMapping has this set up and it's a pretty significant help, but I just use that infrastructure; others maintain it.
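To make the regression-testing idea concrete, here is a minimal sketch of what such a pytest suite could look like. The function name `elevation_at` and the specific coordinates are purely illustrative stand-ins for the project's real code; the SRTM tile-boundary case is modelled on the bug mentioned above.

```python
# Sketch of a regression-style pytest suite. elevation_at() is a
# hypothetical stand-in for the project's real lookup function;
# all names and values here are illustrative assumptions.

def elevation_at(lat, lon):
    # Placeholder implementation so the sketch is runnable;
    # the real project function would read from SRTM data.
    return 100.0

def test_output_within_plausible_bounds():
    # Basic sanity bound on the output rather than an exact
    # golden value: elevations on land should fall in this range.
    assert -500 <= elevation_at(47.6, -122.3) <= 9000

def test_srtm_tile_boundary():
    # Regression case for the tile-boundary issue: points just
    # either side of a whole-degree boundary should give values
    # that are close to each other, not wildly different.
    a = elevation_at(46.99999, -122.0)
    b = elevation_at(47.00001, -122.0)
    assert abs(a - b) < 100
```

Once cases like these exist, `pytest` discovers and runs every `test_*` function automatically, so adding a new regression case is just adding a function.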
Those tests sound like the right trade-off of utility / difficulty to write.
GitHub Actions is a useful and free way to set up automated tests. An example used for tests is here. If you can get all the dependencies working in one of the pre-set environments, it'd probably be simpler than setting up Docker. I'm not too familiar with using GH Actions for testing, but I can help set this up.
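For reference, a minimal workflow to run pytest on every push and PR might look like the sketch below. The file path, Python version, and requirements file name are assumptions about this project, not taken from it.

```yaml
# Sketch of .github/workflows/tests.yml; Python version and
# dependency file name are assumptions for illustration.
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest
```

If the geospatial dependencies don't install cleanly with pip on the stock Ubuntu runner, the workflow could instead build and run the Docker image, at the cost of slower CI runs.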
Oh, that looks handy. I was definitely thinking of this in terms of auto-building the Docker image, but if we can manage the dependencies on top of a pre-set environment this could be easier to get running.
**Is your feature request related to a problem? Please describe.**
In the absence of automated tests, it's easy to break existing functionality when adding new features.

**Describe the solution you'd like**
I'd like to have some automated testing configured for this project.

**Describe alternatives you've considered**
So far I've been running a manual testing script, and it's OK but much too easy to forget a case.