If we can gather all the information about the tests we have written through reflection, why not use this information to invoke each test automatically? That means we no longer need to write all these Run() methods; we just focus on writing tests, and their results will be included in the test report automatically.
What does it mean to run a test? It means calling the test method with sensible parameters. For test methods, the parameters are the input data and the expected result. Currently a test class consists of two parts: the tests themselves and a huge Run() method that contains pairs of "input data / expected result" (each pair can be called a test case) and calls the test methods for each test case.
Instead, I suggest writing tests in the following fashion:
[TestFixture(typeof(Calculator))]
class CalculatorFixture
{
    [Test]
    [Covers(nameof(Calculator.Substract))]
    [TestCase(3, 5, -2)]
    [TestCase(-2, 3, -5)]
    [TestCase(5, -2, 7)]
    [TestCase(-3, -2, 1)]
    void ShouldSubstractNumbers(double number1, double number2, double expectedResult)
    {
        var calculator = new Calculator();
        calculator.Substract(number1, number2).ShouldBeEqual(expectedResult);
    }

    [Test]
    [Covers(nameof(Calculator.SetComplexNumberProvider))]
    [Covers(nameof(Calculator.Substract))]
    [Throws<ComponentNotRegistredException>()]
    void ShouldThrowIfComplexNumberSubstractedWithoutProvider()
    {
        var calculator = new Calculator();
        calculator.SetComplexNumberProvider(null);
        var complexNumber1 = new ComplexNumber(1, 3);
        var complexNumber2 = new ComplexNumber(0, -1);
        calculator.Substract(complexNumber1, complexNumber2).ShouldThrows();
    }
}
There are several important points here:
To avoid repeating the functional class type in each [Covers] attribute, we will use an overload of the [TestFixture] attribute that accepts an optional Class parameter, and an overload of the [Covers] attribute in which the Class parameter is also optional. When building the functionality report, we use TestFixture.Class as the default for every [Covers] that does not have its own Class parameter; if it does, its own value is used instead. If neither [TestFixture] nor [Covers] has that parameter, a warning with the test method name is generated and included in the functionality analysis report.
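Assuming the attribute shapes described above, the fallback could be sketched like this (the CoversResolver class and exact property names are illustrative, not part of the specified design):

```csharp
using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Class)]
class TestFixtureAttribute : Attribute
{
    public Type Class { get; }
    public TestFixtureAttribute(Type @class = null) => Class = @class;
}

[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
class CoversAttribute : Attribute
{
    public string Method { get; }
    public Type Class { get; set; }   // optional: overrides TestFixture.Class
    public CoversAttribute(string method) => Method = method;
}

static class CoversResolver
{
    // Returns the covered class, or null -- the caller should then emit
    // a warning with the test method name into the analysis report.
    public static Type ResolveClass(MethodInfo test, CoversAttribute covers)
    {
        if (covers.Class != null)
            return covers.Class;                     // own parameter wins
        return test.DeclaringType
                   .GetCustomAttribute<TestFixtureAttribute>()?.Class;
    }
}
```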
To operate with test cases easily, we will store them in [TestCase] attributes that take a params parameter array, because different tests may have different numbers of input parameters. The last parameter will always be treated as the expected result value.
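A minimal sketch of such an attribute, under the convention just stated (the Inputs/ExpectedResult split is one possible way to support reporting and is not prescribed by the text):

```csharp
using System;
using System.Linq;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
class TestCaseAttribute : Attribute
{
    // All values, inputs first; the whole array is passed to the test method.
    public object[] Arguments { get; }

    public TestCaseAttribute(params object[] arguments) => Arguments = arguments;

    // Split used for reporting: everything but the last value is input,
    // the last value is the expected result.
    public object[] Inputs => Arguments.Take(Arguments.Length - 1).ToArray();
    public object ExpectedResult => Arguments[^1];
}
```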
The return value of each test will always be void instead of bool. To signal to the test runner whether a test passed or failed, we will use an exception-based mechanism: if the method completes without any exception, it passed; otherwise it failed. This mechanism helps keep the test code as simple as possible.
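Inside the runner, this policy reduces to a single try...catch around the reflective call; a sketch with illustrative names:

```csharp
using System;
using System.Reflection;

static class TestInvoker
{
    // A test passes exactly when its invocation completes without throwing.
    public static (bool Passed, string Message) Invoke(
        object fixture, MethodInfo test, object[] arguments)
    {
        try
        {
            test.Invoke(fixture, arguments);
            return (true, null);
        }
        catch (TargetInvocationException e)
        {
            // MethodInfo.Invoke wraps the test's exception; unwrap it
            // so the report shows the real failure message.
            return (false, e.InnerException?.Message ?? e.Message);
        }
    }
}
```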
The exception mechanism is reversed when a test has the [Throws] attribute. This attribute accepts the type of the expected exception as a parameter. If the method throws exactly that exception type, the test passes; if it throws no exception, or an exception of any other type, it fails.
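The reversed policy could look like this (again a sketch; the ThrowsPolicy helper is illustrative):

```csharp
using System;
using System.Reflection;

static class ThrowsPolicy
{
    // With [Throws], a test passes only when exactly the expected
    // exception type escapes the method.
    public static bool Passed(object fixture, MethodInfo test,
                              object[] arguments, Type expectedException)
    {
        try
        {
            test.Invoke(fixture, arguments);
            return false;                            // no exception: failed
        }
        catch (TargetInvocationException e)
        {
            // Exact type match; a subclass or unrelated type also fails.
            return e.InnerException?.GetType() == expectedException;
        }
    }
}
```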
It is very important that a test name be a meaningful phrase declaring some desired behavior of the tested system. Later these names will be used in the report:
Calculator:
    ShouldSubstractNumbers [fail, given "number1=-3, number2=-2" returns "-1" instead of "1"]
    ShouldThrowIfComplexNumberSubstractedWithoutProvider [pass]
    ...
Please pay attention to the form in which a failing case is reported, and try to figure out how such a form can be obtained.
To keep our expectations clear and to avoid explicitly throwing exceptions when expectations are not fulfilled, we will use Should... extension methods. These methods are chained onto result values; they compare the value with the given expected result and throw a TestFailedException if the two do not match. A tricky case is calculator.Substract(complexNumber1, complexNumber2).ShouldThrows(): the extension method should not be called at all if the preceding Substract() call threw an exception. If for some reason it did not throw and the extension method is called, then ShouldThrows() itself throws a TestFailedException. Of course, by the logic of the fourth point above we could leave this test without any check, since it would fail anyway if it completed without an exception, but using a Should... extension method makes our expectations clearer.
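A minimal sketch of these extension methods, using the TestFailedException and method names from this text (message wording is illustrative):

```csharp
using System;

class TestFailedException : Exception
{
    public TestFailedException(string message) : base(message) { }
}

static class ShouldExtensions
{
    public static void ShouldBeEqual<T>(this T actual, T expected)
    {
        if (!Equals(actual, expected))
            throw new TestFailedException(
                $"returns \"{actual}\" instead of \"{expected}\"");
    }

    // This line is reached only if the preceding call did NOT throw,
    // so reaching it at all means the expectation was violated.
    public static void ShouldThrows<T>(this T _) =>
        throw new TestFailedException("expected an exception, but none was thrown");
}
```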
Note
This test engine design is mostly inspired by the NUnit test library, so feel free to read its documentation and propose implementing functionality from there that we have not covered here.
Tasks
[x] Make the FuncAnalyzerReport class work with the definition of Class and Method split between [Covers] and [TestFixture].
[x] Create a TestRunner class that finds all test methods in a given assembly and invokes them inside a try...catch construction with the exception-based pass policy. Results of the test run should not be printed immediately; instead, store them in a collection of records and, when the run finishes, raise an event with that collection as the argument.
[x] Support the case when a test has a [Throws] attribute.
[x] Create a TestReporter class that prints the collection of records as a Markdown-formatted report to a file. Bind this reporter to the runner's event.
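One possible outline of the runner from the second task (attribute stubs are repeated here to keep the sketch self-contained; the record shape and event signature are illustrative, and [Throws] handling is omitted for brevity):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
class TestAttribute : Attribute { }

[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
class TestCaseAttribute : Attribute
{
    public object[] Arguments { get; }
    public TestCaseAttribute(params object[] arguments) => Arguments = arguments;
}

record TestResult(string Fixture, string Test, bool Passed, string Message);

class TestRunner
{
    // Raised once, after all tests have run; a TestReporter subscribes here.
    public event Action<IReadOnlyList<TestResult>> RunFinished;

    public void Run(Assembly assembly)
    {
        var results = new List<TestResult>();
        foreach (var type in assembly.GetTypes())
        foreach (var method in type.GetMethods(
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
        {
            if (method.GetCustomAttribute<TestAttribute>() == null)
                continue;
            foreach (var testCase in method.GetCustomAttributes<TestCaseAttribute>())
            {
                var fixture = Activator.CreateInstance(type);
                try
                {
                    method.Invoke(fixture, testCase.Arguments);
                    results.Add(new TestResult(type.Name, method.Name, true, null));
                }
                catch (TargetInvocationException e)
                {
                    results.Add(new TestResult(type.Name, method.Name, false,
                        e.InnerException?.Message ?? e.Message));
                }
            }
        }
        RunFinished?.Invoke(results);
    }
}
```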