The main report model is currently a rich object model which is also semi-immutable (its state is supplied via constructor injection). This is problematic when it comes to serialization and deserialization. To make serialization easier, these classes should be reworked into fully mutable POCOs.
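As a rough sketch of the intended shape of that change (the type and property names below are placeholders, not necessarily the real Screenplay report types):

```csharp
using System.Collections.Generic;

namespace Before
{
    // Current style: a rich, semi-immutable model whose state arrives via
    // constructor injection. A serializer must match constructor parameters to
    // rehydrate it, and it cannot be built up incrementally as results arrive.
    public class ScenarioReport
    {
        public ScenarioReport(string name, IReadOnlyList<string> performances)
        {
            Name = name;
            Performances = performances;
        }

        public string Name { get; }
        public IReadOnlyList<string> Performances { get; }
    }
}

namespace After
{
    // Target style: a fully mutable POCO with a parameterless constructor and
    // settable properties, which JSON (de)serializers can populate trivially.
    public class ScenarioReport
    {
        public string Name { get; set; }
        public List<string> Performances { get; set; } = new List<string>();
    }
}
```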
Summary
This big change was prompted by timeouts from the NUnit test runner. Screenplay's previous mechanism involved building an in-memory object model for the report, then writing it all at the end of the test run. This proved to be unworkable because the test runner gives only a short time after all tests have completed. If an 'at end of test run' handler takes too long then it is forcibly terminated and the test runner exits with an error.
The new reporting mechanism needs to 'stream' the report information into a JSON file as each scenario completes. This spreads the work of writing the report across the course of the test run. Thus, there is no longer a 'big job' to do at the end of the test run.
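A minimal sketch of what that streaming could look like using System.Text.Json follows; the class below is illustrative only and is not the actual Screenplay implementation:

```csharp
using System;
using System.IO;
using System.Text.Json;

// Illustrative only: keeps the report file open for the whole test run and
// appends one JSON array element as each scenario completes, so very little
// work remains to be done at the end of the run.
public sealed class StreamingReportWriter : IDisposable
{
    readonly FileStream stream;
    readonly Utf8JsonWriter writer;

    public StreamingReportWriter(string path)
    {
        stream = File.Create(path);
        writer = new Utf8JsonWriter(stream, new JsonWriterOptions { Indented = true });
        writer.WriteStartArray();
    }

    // Called once per scenario, as soon as that scenario has completed.
    public void WriteScenario<T>(T scenarioReport)
    {
        JsonSerializer.Serialize(writer, scenarioReport);
        writer.Flush();
    }

    // The end-of-run work is now trivial: close the JSON array and the file.
    public void Dispose()
    {
        writer.WriteEndArray();
        writer.Flush();
        writer.Dispose();
        stream.Dispose();
    }
}
```

Flushing after each scenario also means that if the process is terminated mid-run, the report written so far is already on disk, apart from the closing bracket of the array.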
However, as it stands (without the work in this ticket), the current report model cannot be streamed into a file in this way, because it is too complex and cannot be constructed until the whole report is complete.