IgorTodorovskiIBM opened this issue 12 months ago
Ah, I get the 2nd question from the related Discussion post now.

Yeah, having a test framework will let meta itself look a lot cleaner once the conditionals etc. are moved into expect_*() functions.
Just had a cursory look, but shellspec looks pretty good. https://shellspec.info/why.html
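For reference, here's roughly what a spec could look like. This is only a sketch of shellspec's DSL; the `zopen --version` invocation and the expected output are placeholders, not an actual meta test:

```sh
# spec/zopen_spec.sh -- hypothetical example, run with `shellspec` from the project root
Describe 'zopen'
  It 'reports a version and exits successfully'
    When call zopen --version
    The status should be success
    The output should include 'zopen'
  End
End
```

shellspec discovers `spec/*_spec.sh` files automatically, so each meta subcommand could get its own spec file.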
We should think about providing a consistent testing structure for meta. Currently it's a bit difficult for a human to parse the test results, and I'm sure this will become increasingly problematic as we add more tests.
One option is to develop our own test library to keep the tests consistent. In the library, we can implement common functions for comparisons, failure reporting, etc.
One model we can look at is Google Test, which has comparison functions such as EXPECT_EQ, EXPECT_NE, and EXPECT_TRUE (plus ASSERT_* counterparts that abort the test on failure).
Currently we have guards for every check (I'll ignore the set -e cases), and we print a unique error message when a condition is not met. We can delegate this to expect_* functions, which print the error message when the condition is not satisfied, as in the sketch below:
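Something along these lines, where the expect_eq/expect_file_exists names, the messages, and the FAILURES counter are all illustrative, not an existing meta API:

```sh
#!/bin/sh
# Hypothetical expect_* helpers; names and messages are illustrative only.
FAILURES=0

# expect_eq <actual> <expected> <message>: report and count a failure on mismatch
expect_eq() {
  if [ "$1" != "$2" ]; then
    printf 'FAIL: %s (expected "%s", got "%s")\n' "$3" "$2" "$1" >&2
    FAILURES=$((FAILURES + 1))
  fi
}

# expect_file_exists <path> <message>: fail if the file is missing
expect_file_exists() {
  if [ ! -f "$1" ]; then
    printf 'FAIL: %s (missing file "%s")\n' "$2" "$1" >&2
    FAILURES=$((FAILURES + 1))
  fi
}

# Example usage (the checks themselves are placeholders):
expect_eq "$(uname)" "OS/390" "running on z/OS"
expect_file_exists "./buildenv" "port has a buildenv"

if [ "$FAILURES" -gt 0 ]; then exit 1; fi
```

Each test script then reduces to a list of expect_* calls, and the pass/fail summary logic lives in one place.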
Another approach is to leverage an existing shell testing library like bats: https://bats-core.readthedocs.io/en/stable/tutorial.html#your-first-test. It depends on bash, but that is fine since we already added bash to our list of dependencies for metaport.
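For comparison, a first bats test could look roughly like this; the zopen invocation and the asserted output are assumptions for illustration:

```bash
#!/usr/bin/env bats
# Hypothetical bats test; bats' `run` captures $status and $output for us.

@test "zopen with no arguments prints usage and fails" {
  run zopen
  [ "$status" -ne 0 ]
  [[ "$output" == *"usage"* ]]
}
```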