kwsutter opened this issue 5 years ago
Part 1: The microProfile-3.0 convenience feature is only tested to ensure that the feature definitions exist, can be installed, and a server can successfully be started. I have verified that this new feature is being run as part of those automated tests. The functional testing of the component features of MicroProfile (mpHealth, mpMetrics, and mpRestClient) is covered by those respective components.
Part 2: Assessment is between a 3 and a 4. We have definitely provided enough coverage for the golden path. If a server.xml configured with a microProfile-x.y feature does not start successfully, our normal Liberty processing takes over and provides the logging necessary to determine the error. Though that sounds minimal, I think it is sufficient for these convenience features.
I have also been working with our System Test team to ensure that all testing of the individual component features includes all three flavors of the server.xml configuration:
- The individual component features (i.e. mpHealth-2.0, mpMetrics-2.0, and mpRestClient-1.3)
- The microProfile-3.0 feature
Again, this is not an automated FAT, but it provides extra test coverage to ensure that we're testing the various combinations of the MicroProfile features.
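For reference, the configuration flavors listed above differ only in the featureManager block of server.xml. A minimal sketch, using the standard Liberty server.xml elements (the description attribute values are illustrative):

```xml
<!-- Flavor 1: enable the individual component features explicitly -->
<server description="MicroProfile component features">
    <featureManager>
        <feature>mpHealth-2.0</feature>
        <feature>mpMetrics-2.0</feature>
        <feature>mpRestClient-1.3</feature>
    </featureManager>
</server>
```

```xml
<!-- Flavor 2: the microProfile-3.0 convenience feature pulls in the
     same component features as part of the MicroProfile 3.0 platform -->
<server description="MicroProfile convenience feature">
    <featureManager>
        <feature>microProfile-3.0</feature>
    </featureManager>
</server>
```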
The automated FATs that cover the testing for the convenience feature, microProfile-3.0: https://libfsfe01.hursley.ibm.com/liberty/dev/Xo/release/cl190320190314-0300-_6N6gEEX8Eemzxv6qGNuYMg/overall-fat-feature-deps.json
(This link is from a successful MicroProfile 2.1 build. The main thing is to look at the overall-fat-feature-deps.json file in the General section of the Downloads. The MicroProfile 3.0 changes haven't made it into a FAT run yet, but they will be very similar...)
microprofile-3.0:
0 | "com.ibm.ws.install.utility_offline_fat"
1 | "com.ibm.ws.install_fat"
2 | "com.ibm.ws.install_offline_fat"
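Rendered as JSON, that excerpt presumably maps the feature name to the array of FAT buckets that exercise it. The exact schema of overall-fat-feature-deps.json is not shown in this issue, so the following shape is an assumption based on the excerpt:

```json
{
  "microprofile-3.0": [
    "com.ibm.ws.install.utility_offline_fat",
    "com.ibm.ws.install_fat",
    "com.ibm.ws.install_offline_fat"
  ]
}
```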
I'll update this Issue when I have a direct link to the FAT run with the microProfile-3.0 feature.
FAT Focal Approval comment: Hi @kwsutter, please let me know once you have a microprofile-3.0 run and I'll see if I can get this reviewed. I expect I'll only be able to approve once mpRestClient-1.3 and mpHealth-2.0 are also approved (mpMetrics-2.0 is already approved).
FAT Focal Approval comment: Hi @kwsutter, based on our discussions yesterday over Slack and over the phone I won't be granting a FAT Focal Approval until LG-50 (#7946) and LG-75 (#7415) have been approved.
1) Describe the test strategy & approach for this feature, and describe how the approach verifies the functions delivered by this feature. The description should include the positive and negative testing done, whether all testing is automated, what manual tests exist (if any) and where the tests are stored (source control). Automated testing is expected for all features with manual testing considered an exception to the rule.
For any feature, be aware that only FAT tests (not unit or BVT) are executed in our cross-platform testing. To get cross-platform coverage, make sure you have sufficient FAT coverage to verify the feature.
If delivering tests outside of the standard Liberty FAT framework, do the tests push their results into the cognitive testing database? (If not, consult with the CSI Team, who can provide advice and verify whether results are being received.)
2) Collectively, as a team, you need to assess your confidence in the testing delivered based on the values below. This should be done as a team, not by an individual, to ensure more eyes are on it and that pressures to deliver quickly are absorbed by the team as a whole.
Please indicate your confidence in the testing (up to and including FAT) delivered with this feature by selecting one of these values:
0 - No automated testing delivered
1 - We have minimal automated coverage of the feature including golden paths. There is a relatively high risk that defects or issues could be found in this feature.
2 - We have delivered reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that could be done here. Error/outlying scenarios are not really covered. There are likely risks that issues exist in the golden paths.
3 - We have delivered all the automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error/outlying scenarios. There is a risk when the feature is used outside the golden paths, but we are confident in the golden path. Note: this may still be a valid end state for a feature; things like Beta features may well suffice at this level.
4 - We have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error/outlying scenarios. While more testing of the error/outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide.
5 - We have delivered all automated testing we believe is needed for this feature. The testing covers all golden path cases as well as all the error/outlying scenarios that make sense. We are not aware of any gaps in the testing at this time. No manual testing is required to verify this feature.
Based on your answer above, for any answer other than a 4 or 5, please provide details of what drove your answer. Please be aware that it may be perfectly reasonable in some scenarios to deliver with any of the values above. We may accept that no automated testing is needed for some features, or be happy with low levels of testing on samples, for instance, so please don't feel the need to drive to a 5. We need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid. What are the gaps, what is the risk, etc.? Please also provide links to the follow-on work that is needed to close the gaps (should you deem it needed).