astropiuu opened 1 month ago
This is looking really good! Thanks for your efforts. As a broader point (you don't have to address this right now), I think we need to decide whether the status vars using randomisation should dump their randomly generated values somewhere as artifacts to make it easier to debug if something goes wrong. Alternatively, it could be easier to randomly generate seeds and store the seeds as artifacts.
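To make the seed idea concrete, here is a minimal pytest sketch; the fixture name (`rng_seed`) and the artifact path are invented for illustration and are not part of this PR:

```python
# Minimal sketch of the seed-as-artifact idea (names are hypothetical):
# each randomised test draws a fresh seed, seeds the RNG with it, and
# records it in a file that the CI workflow can upload as an artifact.
import random
import secrets
from pathlib import Path

import pytest

SEED_LOG = Path("test_artifacts/random_seeds.txt")  # hypothetical path


@pytest.fixture
def rng_seed(request):
    """Seed the global RNG per test and log the seed for debugging."""
    seed = secrets.randbits(32)
    random.seed(seed)
    SEED_LOG.parent.mkdir(parents=True, exist_ok=True)
    with SEED_LOG.open("a") as f:
        f.write(f"{request.node.nodeid}: {seed}\n")
    return seed
```

A failing run could then be reproduced by re-seeding with the logged value, and the CI job could publish the file with actions/upload-artifact.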
Thanks! Yes, dumping status variables would be good. However, I have commented out the tests that randomize the status variables because they break some modules.
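An alternative to commenting the tests out would be pytest's skip marker, which keeps them visible in the test report along with the reason; a hypothetical sketch:

```python
import pytest


# Sketch only: marking the randomised tests as skipped keeps them listed
# in the test report with a reason, instead of disappearing entirely.
@pytest.mark.skip(reason="randomising status variables breaks some modules")
def test_randomized_status_variables():
    ...
```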
@astropiuu can you check why the tests are failing?
Testing framework based on pytest that would run on GitHub Actions
Description
The PR includes a testing framework based on pytest and a GitHub Actions workflow that runs it.
Motivation and Context
The testing framework helps identify whether new changes break existing functionality. So far, the tests cover loading all modules, resetting status variables, modifying status variables, and unit tests for the Odmr Logic module.
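For a rough idea of the shape of these tests, below is a simplified, hypothetical version of the module-loading check using plain imports; the actual framework presumably drives qudi's module manager, and the module paths listed are only examples:

```python
import importlib

import pytest

# Hypothetical module paths, standing in for the full list in the PR.
MODULE_PATHS = [
    "qudi.logic.odmr_logic",
]


@pytest.mark.parametrize("path", MODULE_PATHS)
def test_module_loads(path):
    # The test passes if the module imports without raising.
    module = importlib.import_module(path)
    assert module is not None
```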
How Has This Been Tested?
The framework has been tested on a fork repository using the same test workflow.
Types of changes
The PR only changes the pyproject.toml and setup.py files. The rest of the newly added files are part of the testing framework and do not modify any existing code.
Checklist:
/docs/changelog.md