Test cases are currently written in the same Markdown document as the rules they test and are manually assigned consecutive numbers. This works fine for a small number of test cases, but it quickly becomes unwieldy as that number grows. To ensure consistent implementations of rules, I think it is in our best interests to make it easy to write, document, and maintain a large number of test cases per rule, so finding a more scalable approach to writing them would be great.
Some initial thoughts on requirements:
- Adding a new test case to a rule must not cause any merge conflicts. As such:
  - Identifiers for test cases must be constructed without regard for existing test cases.
  - Each test case must live in its own file, named by its identifier.
- It must be possible to document, alongside each test case, why it passes, fails, or is inapplicable. (A rough sketch of what such a per-file test case could look like follows below.)
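
To make these requirements a bit more concrete, here is a minimal sketch of one way a per-file test case could work. Everything in it is hypothetical: the `test-cases/` directory, the `newTestCase` helper, the front matter field names, and the use of UUIDs are just one possible shape that satisfies the points above, not a settled design.

```typescript
import { randomUUID } from "crypto";
import { mkdirSync, writeFileSync } from "fs";
import { join } from "path";

// Hypothetical layout: each test case lives in its own file inside
// `test-cases/`, named by a freshly generated UUID, so the identifier is
// constructed without looking at existing cases and parallel branches
// adding cases cannot collide.
const TEST_CASE_DIR = "test-cases";

type Expected = "passed" | "failed" | "inapplicable";

function newTestCase(expected: Expected, reason: string, html: string): string {
  mkdirSync(TEST_CASE_DIR, { recursive: true });
  const id = randomUUID();
  const path = join(TEST_CASE_DIR, `${id}.md`);
  // Front matter records the expected outcome and the reason for it,
  // so the documentation travels with the test case itself.
  const contents = [
    "---",
    `id: ${id}`,
    `expected: ${expected}`,
    `reason: ${reason}`,
    "---",
    html,
    "",
  ].join("\n");
  writeFileSync(path, contents);
  return path;
}

// Example: a case expected to fail, with its reason documented alongside it.
newTestCase(
  "failed",
  "The img element has no text alternative.",
  '<img src="logo.png">'
);
```

With something along these lines, two branches that each add a test case to the same rule touch different files with independently generated names, so neither the identifier scheme nor the file layout produces merge conflicts, and the expected outcome and its reason are kept next to the markup they describe.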