For each function, we should define a pytest guideline to follow (to avoid future issues like #58).
Pytests should consist of the following (example for the function f(a, b, c=1) = (a + b) / c; a sketch of these tests follows the checklist):
PYTEST CHECKLIST
[ ] unitary test (e.g. "1+1"): the simplest test that should pass, with no optional parameters
[ ] unitary fail test (e.g. "1 + NaN" and "NaN + 1"): the simplest test that should fail, with no optional parameters. It should be caught with pytest.raises(ERRORNAME, match=EXPECTED_ERROR_MSG):
[ ] more advanced test (e.g. "2+3"): an advanced test to check that the parameters actually influence the output. There could be more than one, but they should be kept to a minimum
[ ] parameters test: the simplest test for each optional parameter. Objective: check that the parameter is taken into account (e.g. "(1+1)/2")
[ ] parameters fail: the simplest failing test for each optional parameter. Objective: check that a wrong parameter value is actually detected (e.g. "(1+1)/0")
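As a sketch of what this checklist could look like in practice (the implementation of f and its error messages below are hypothetical, only there to make the example runnable):

```python
import math
import pytest

# Hypothetical implementation of f(a, b, c=1) = (a + b) / c, used for illustration only.
def f(a, b, c=1):
    if any(isinstance(x, float) and math.isnan(x) for x in (a, b)):
        raise ValueError("a and b must not be NaN")
    if c == 0:
        raise ValueError("c must be non-zero")
    return (a + b) / c

def test_f_unitary():
    # unitary test: "1+1" with no optional parameters
    assert f(1, 1) == 2

def test_f_unitary_fail():
    # unitary fail test: "1 + NaN" and "NaN + 1"
    with pytest.raises(ValueError, match="must not be NaN"):
        f(1, float("nan"))
    with pytest.raises(ValueError, match="must not be NaN"):
        f(float("nan"), 1)

def test_f_advanced():
    # more advanced test: "2+3", checks that the parameters actually influence the output
    assert f(2, 3) == 5

def test_f_parameter_c():
    # parameters test: "(1+1)/2", checks that c is taken into account
    assert f(1, 1, c=2) == 1

def test_f_parameter_c_fail():
    # parameters fail: "(1+1)/0", checks that a wrong value of c is detected
    with pytest.raises(ValueError, match="must be non-zero"):
        f(1, 1, c=0)
```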
Note that even with this simple function, we already have 5 different pytests, and the number grows quickly.
The important point here is that writing pytests while coding the function is quite fast. However, coming back to the function afterwards takes an order of magnitude more time. Actually, according to Jeff Sutherland's Scrum, around 24 times more: e.g. 1 hour of pytests written while coding takes 3 working days to get right the second time. And that is without counting all the possible unseen mistakes and the re-debugging of the code.
I need to add that somewhere in the guidelines of CONTRIBUTING.md.
Meanwhile, this issue will stay open.
Now, when reviewing a PR, we should check that ALL of the mentioned tests are included.
EDIT:
PYTEST SANITY CHECK
We shouldn't test results against results that are already validated by another function's pytests. Example:
we have f(a, b, c=1) = (a + b) / c, which already has the pytests mentioned earlier, and
we write a new function g(d, a, b, c=1) = f(a, b, c) + d.
Pytests should look like this (a sketch follows the checklist):
[ ] test d pass: e.g. "(1+1)/1 + 1" with no optional parameters
[ ] test d fail: e.g. "(1+1)/1 + NaN"
[ ] one unique complex test: e.g. "(5+11)/17 + Pi", in which each inherited parameter's contribution is easy to separate
[ ] the a, b and c parameters are already tested with the f(a, b, c=1) function and shouldn't be re-tested for g()
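A sketch of the corresponding tests for g (g, its error message and the reuse of the hypothetical f from the previous sketch are assumptions for illustration; the point is that a, b and c get no dedicated tests here):

```python
import math
import pytest

# Hypothetical f, as sketched earlier (its own pytests live with f, not here).
def f(a, b, c=1):
    if any(isinstance(x, float) and math.isnan(x) for x in (a, b)):
        raise ValueError("a and b must not be NaN")
    if c == 0:
        raise ValueError("c must be non-zero")
    return (a + b) / c

# Hypothetical g(d, a, b, c=1) = f(a, b, c) + d.
def g(d, a, b, c=1):
    if isinstance(d, float) and math.isnan(d):
        raise ValueError("d must not be NaN")
    return f(a, b, c) + d

def test_g_d_pass():
    # test d pass: "(1+1)/1 + 1" with no optional parameters
    assert g(1, 1, 1) == 3

def test_g_d_fail():
    # test d fail: "(1+1)/1 + NaN"
    with pytest.raises(ValueError, match="must not be NaN"):
        g(float("nan"), 1, 1)

def test_g_unique_complex():
    # unique complex test: "(5+11)/17 + Pi", each inherited parameter's
    # contribution is easy to separate
    assert g(math.pi, 5, 11, c=17) == pytest.approx((5 + 11) / 17 + math.pi)

# No dedicated tests for a, b and c: they are already covered by f's pytests.
```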