Python's unittest library has support for running multiple parameterized subtests under a single test method (see here). This is helpful when users want to run multiple checks in a granular fashion, rather than having the test stop on the first failure.
I wrote up a simple example test case to demonstrate this below:
import unittest


class TestPricing(unittest.TestCase):
    def test_calculate(self):
        # Base price by item
        base_price = {"gum": 1.00, "milk": 2.50, "eggs": 2.75}
        # Sales tax by state
        sales_tax = {"Michigan": 0.06, "Ohio": 0.0575, "New Hampshire": 0.00}

        # Loop through each state and item and precompute the expected price
        precalculated_price = {}
        for state in sales_tax:
            precalculated_price[state] = {}
            for item in base_price:
                precalculated_price[state][item] = (1.0 + sales_tax[state]) * base_price[item]

        # Intentionally mess up the Michigan price for gum and the Ohio price for eggs
        precalculated_price["Michigan"]["gum"] = 100.0
        precalculated_price["Ohio"]["eggs"] = -3.14159

        # Run through nested subtests, by state and item, and check that the logged price matches the expected price
        for state in sales_tax:
            with self.subTest(state=state):
                for item in base_price:
                    with self.subTest(item=item):
                        expected_price = (1.0 + sales_tax[state]) * base_price[item]
                        logged_price = precalculated_price[state][item]
                        assert logged_price == expected_price
When I run this test case using python directly (python -m unittest test_price.py -v), I get the following failure information:
test_calculate (test_price.TestPricing) ...
======================================================================
FAIL: test_calculate (test_price.TestPricing) (item='gum', state='Michigan')
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/trbrooks/Downloads/test_price.py", line 29, in test_calculate
assert logged_price == expected_price
AssertionError
======================================================================
FAIL: test_calculate (test_price.TestPricing) (item='eggs', state='Ohio')
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/trbrooks/Downloads/test_price.py", line 29, in test_calculate
assert logged_price == expected_price
AssertionError
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=2)
Which is helpful, because it tells me which subtests specifically failed within the test.
When I run the same test through testflo (testflo -v test_price.py), I get:
test_price.py:TestPricing.test_calculate ... FAIL (00:00:0.00, 35 MB)
Traceback (most recent call last):
File "/Users/trbrooks/miniconda3/envs/OpenMDAO/lib/python3.9/site-packages/testflo/test.py", line 418, in _try_call
func()
File "/Users/trbrooks/Downloads/test_price.py", line 29, in test_calculate
assert logged_price == expected_price
AssertionError
The following tests failed:
test_price.py:TestPricing.test_calculate
Passed: 0
Failed: 1
Skipped: 0
Ran 1 test using 16 processes
Wall clock time: 00:00:0.92
Which is less helpful, since it doesn't pass along any of the subtest information.
Can we add support to print out the subtest failure information, like Python does?
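For reference, the standard library already surfaces this information to test runners through the TestResult.addSubTest hook, which is called once per subtest with the parameterized subtest and, on failure, an exc_info tuple. Below is a minimal sketch (not testflo's actual implementation; the class name and usage are hypothetical) of a result class that reports each failing subtest individually:

import traceback
import unittest


class SubTestReportingResult(unittest.TextTestResult):
    """Sketch: report each failing subtest on its own, with its parameters."""

    def addSubTest(self, test, subtest, outcome):
        super().addSubTest(test, subtest, outcome)
        # outcome is None when the subtest passed; otherwise it is an
        # exc_info tuple (type, value, traceback) describing the failure.
        if outcome is not None:
            # subtest.id() includes the parameters, e.g.
            # "test_calculate (...) (item='gum', state='Michigan')"
            self.stream.writeln(f"SUBTEST FAIL: {subtest.id()}")
            self.stream.write("".join(traceback.format_exception(*outcome)))


if __name__ == "__main__":
    # Hypothetical usage: plug the result class into the standard text runner.
    runner = unittest.TextTestRunner(resultclass=SubTestReportingResult, verbosity=2)
    unittest.main(module="test_price", testRunner=runner, exit=False)

Something along those lines in testflo's result handling would let the per-subtest parameters show up in its failure report.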