zopencommunity / metaport

zopen package manager
Apache License 2.0

Metaport unit test library #34

Open IgorTodorovskiIBM opened 11 months ago

IgorTodorovskiIBM commented 11 months ago

We should think about providing a consistent testing structure for meta. Currently it's a bit difficult for a human to parse the test results, and I'm sure this will become increasingly problematic as we add more tests.

One option is to develop our own test library to enable consistency. In the test library, we can implement common functions for comparison, failure, etc.

One model we can look at is Google Test, which provides comparison assertions such as:

EXPECT_EQ(), EXPECT_GE(), etc. (written here in shell style as expect_eq(), expect_ge(), and so on)

Currently we have guards for every check (I'll ignore the set -e cases) and we print out a unique error message when the condition is not met. We can delegate this to expect_* functions which can print out the error message when the condition is not satisfied:

expect_eq() {
  arg1=$1
  arg2=$2
  whatItsDoing="$3"
  if [ "$arg1" != "$arg2" ]; then
    echo "not ok: $whatItsDoing failed. $arg1 != $arg2"
  else
    echo "ok: $whatItsDoing passed"
  fi
}
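To make the idea concrete, here is a standalone sketch of how such a helper might be used in a port's test script. The helper is restated so the snippet runs on its own, and the checked commands are purely illustrative, not actual metaport checks:

```shell
# Illustrative expect_eq helper; prints TAP-style ok/not-ok lines
# and tracks a failure count so the script can exit non-zero.
failures=0

expect_eq() {
  if [ "$1" = "$2" ]; then
    printf 'ok: %s passed\n' "$3"
  else
    printf 'not ok: %s failed. %s != %s\n' "$3" "$1" "$2"
    failures=$((failures + 1))
  fi
}

# Example checks (illustrative only)
expect_eq "$(echo hello)" "hello" "echo output check"
expect_eq "$((1 + 1))" "2" "arithmetic check"

# Non-zero exit tells the caller that at least one check failed.
[ "$failures" -eq 0 ] || exit 1
```

Because each check prints a uniform ok/not-ok line, the overall output stays easy to scan even as the number of tests grows.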

Another approach is to leverage an existing shell testing library like bats (https://bats-core.readthedocs.io/en/stable/tutorial.html#your-first-test). It depends on bash, but that is ok since we already added bash to our list of dependencies for metaport.
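For reference, a bats test for a port check might look like the following. The file name and the command being tested are illustrative assumptions, and the file must be executed with the bats runner rather than plain sh:

```shell
#!/usr/bin/env bats
# hypothetical file, e.g. tests/version.bats -- run with: bats tests/version.bats

@test "tool reports its version" {
  # `run` captures exit status in $status and stdout/stderr in $output
  run echo "curl 8.0.0"
  [ "$status" -eq 0 ]
  [[ "$output" == curl* ]]
}
```

bats reports each @test block as its own TAP line, which would give us the consistent, human-scannable output described above without writing our own framework.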

v1gnesh commented 11 months ago

Ah, I get the 2nd question from the related Discussion post now. Yeah, having a test framework will allow meta itself to look a lot cleaner, when the conditionals etc. are shoved into expect_*() functions.

Just had a cursory look, but shellspec looks pretty good. https://shellspec.info/why.html
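For comparison with the bats option above, a shellspec spec might look like this. The file name and the tested command are illustrative assumptions, and the file is run with the shellspec command, not plain sh:

```shell
# hypothetical file, e.g. spec/version_spec.sh -- run with: shellspec
Describe 'version check'
  It 'reports the expected version string'
    # `When call` runs the command; `The ... should` lines are assertions
    When call echo "curl 8.0.0"
    The status should be success
    The output should start with "curl"
  End
End
```

shellspec is POSIX-shell based rather than bash-only, which may matter if we ever want to drop the bash dependency.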