We need an effort in our test suites to mark up tests with labels that describe the functionality of the code under test. I think the best approach for now is to categorize them top-down, starting with the broadest description of the suite and ending with the specific type of test.
The idea is that this would allow a developer to make changes and run a particular suite of tests based on a label. For example, if I'm working on tests against the '11.6.0' version of BigIP, I will naturally want to run just those tests as I'm developing.
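As a rough illustration of label-based selection (a minimal sketch only; the test names, labels, and `select_tests` helper are all hypothetical, not part of any existing suite):

```python
# Hypothetical sketch: each test carries a broad-to-specific label list,
# and a developer selects the subset matching a label of interest.
def select_tests(tests, label):
    """Return the names of tests whose label list contains `label`."""
    return [name for name, labels in tests.items() if label in labels]

# Example registry mapping test names to top-down labels.
tests = {
    "test_license_activation": ["bigip", "11.6.0", "unit"],
    "test_pool_create": ["bigip", "12.0.0", "functional"],
}

# A developer working against BigIP 11.6.0 runs only the matching tests.
print(select_tests(tests, "11.6.0"))
```

In practice the same effect would come from the test runner's own tagging mechanism (e.g. marker or grep-style filters) rather than a custom helper.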
Note that this doesn't mean we can only run targeted tests and call it a day; all tests must still pass before a change is merged. There's more to do here, but I think starting at the top level is easiest, rather than labeling one test as 'fuzzy' and having no consensus on what that means.
The definition of done is test labels at least for the following: