I have several thoughts regarding this:
At one extreme, linking to external tools could be seen as out of scope for StrictDoc, simply because there is already a lot to do. Consider https://github.com/bmw-software-engineering/lobster, a tool by BMW that aims to become a hub for connecting multiple tools into a single aggregated report. The tool is not mature enough for me to consider interfacing with it, but the concept and the format they are creating are very promising.
At the same time, there are many opportunities for doing everything within StrictDoc and having it consume data from multiple tools, just as Lobster does. This is very attractive, at minimum for verification reports.
With the verification report we have a bit of a chicken-and-egg situation: to link back to requirements, the requirements have to be published somewhere and traced to the tests. After the tests have run, a test report is obtained, and then what? Does this mean that StrictDoc has to run again and include the test reports on this second pass?
Using Lobster as a reference for a possible implementation, we very likely need to interface with your specific tool. Could you specify which tool you are using and which format it uses to produce test reports?
StrictDoc can also be used programmatically, outside of its command-line interface; see the Python API sketch: https://strictdoc.readthedocs.io/en/latest/latest/docs/strictdoc_01_user_guide.html#14-Python-API. I could imagine maintaining a set of standalone scripts without even integrating them directly into StrictDoc's main program. For example, such a script for a tool XYZ could convert XYZ's test report into StrictDoc format and export the resulting .sdoc files to a test/ folder. A normal StrictDoc invocation would then generate a larger tree with everything combined. The advantage would be a dedicated XYZ script doing just that one job.
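To make this more concrete, here is a minimal sketch of what such a standalone script could look like, assuming JUnit XML as the input format. The file names, the test/ output folder, and the TEST_RESULT element name are placeholders, a matching custom grammar would still have to be defined in the .sdoc tree, and this does not use the actual StrictDoc Python API.

import xml.etree.ElementTree as ET
from pathlib import Path

def junit_report_to_sdoc(junit_xml_path: str, output_dir: str = "test") -> Path:
    # Parse the JUnit XML report produced by the test runner.
    tree = ET.parse(junit_xml_path)
    lines = ["[DOCUMENT]", "TITLE: Test report", ""]
    for testcase in tree.iter("testcase"):
        status = "Failed" if testcase.find("failure") is not None else "Passed"
        lines += [
            "[TEST_RESULT]",  # placeholder element; must exist in the grammar
            f"UID: {testcase.get('classname')}.{testcase.get('name')}",
            f"STATUS: {status}",
            "",
        ]
    # Write the generated document into a test/ folder that a normal
    # StrictDoc invocation picks up together with the requirements.
    output_path = Path(output_dir) / "test_report.sdoc"
    output_path.parent.mkdir(parents=True, exist_ok=True)
    output_path.write_text("\n".join(lines), encoding="utf-8")
    return output_path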
It is a very interesting feature and it would be great to find a practical implementation path.
I've been playing around with some potential solutions to this issue that do not require StrictDoc modifications.
At the moment I'm using XSLT to transform the JUnit XML into an .sdoc, together with the Doxygen tagfile XML. I've created a TESTCASE grammar where each test case specified in .sdoc gets a UID; this UID is exported to the Doxygen tagfile so that I can use it in the XSLT phase.
I'm using Google Test, so the .cc code can look like this:
/**
* @brief Verify that steering angle react to lateral displacement
*
* @relation(TestCaseVc.Steering1, scope=function)
*
*/
TEST(TestCaseVc, Steering1)
{}
The corresponding UID for the test case specification in .sdoc looks like below, so that the test name in the JUnit file matches the StrictDoc UID.
[TESTCASE]
UID: TestCaseVc.Steering1
For test cases in the JUnit XML that have a corresponding StrictDoc test case, the XSLT creates a [LINK:]. I imagine that very low-level tests, e.g. parameter range checks, will not get a test case written in StrictDoc.
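The same matching rule, expressed as a plain-Python sketch rather than the actual XSLT (the helper name is made up, and known_testcase_uids would in my setup come from the Doxygen tagfile):

def render_result_reference(classname: str, name: str, known_testcase_uids: set) -> str:
    # A JUnit test maps onto the UID "<classname>.<name>".
    uid = f"{classname}.{name}"
    if uid in known_testcase_uids:
        # The test case is specified in .sdoc, so link back to it.
        return f"[LINK: {uid}]"
    # Low-level tests with no written test case keep just their name.
    return uid

# "TestCaseVc.Steering1" is specified in .sdoc, so it gets a link.
print(render_result_reference("TestCaseVc", "Steering1", {"TestCaseVc.Steering1"}))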
The generated .sdoc is then included in the StrictDoc export of the requirements repo via [DOCUMENT_FROM_FILE].
I'm attaching the Python for the transformation and the XSLT here, in case anyone is interested.
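For reference, the driver around the XSLT looks roughly like the sketch below, assuming lxml; the file names and the tagfile stylesheet parameter are placeholders rather than the exact attached script.

from lxml import etree

def junit_to_sdoc(junit_path: str, xslt_path: str, tagfile_path: str, output_path: str) -> None:
    # Compile the stylesheet that maps JUnit XML onto .sdoc text.
    transform = etree.XSLT(etree.parse(xslt_path))
    junit_doc = etree.parse(junit_path)
    # Hand the Doxygen tagfile location to the stylesheet as a parameter.
    result = transform(junit_doc, tagfile=etree.XSLT.strparam(tagfile_path))
    with open(output_path, "w", encoding="utf-8") as sdoc_file:
        sdoc_file.write(str(result))

junit_to_sdoc("results.xml", "junit_to_sdoc.xslt", "doxygen_tagfile.xml", "test/test_results.sdoc")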
The resulting RST renders like this for two tests, where one test case was defined in .sdoc and the other was not.
Still some details to iron out.
One little problem I've encountered is that @relation does not seem to work for my custom-grammar test case, i.e. I do not get forward traceability from my .sdoc test case to the code. I'm not 100% sure that is strictly necessary, but it is something to think about.
I am heading off, but my last thought today is very similar to your message above (I only had 10 seconds to scan through it):
Maybe a good starting point for discussing this would be to think through the most minimal and quickest possible script, or set of scripts, that we could write to achieve what you need, then take it from there and develop it into a more general solution if needed.
I will read your message in detail tomorrow.
This has evolved slightly today with a TEST_RESULT element, which is created by the XSLT transform in my CI pipeline from the JUnit XML and the tagfile. This means that I get forward traceability from TEST_CASE to TEST_RESULT.
[GRAMMAR]
ELEMENTS:
- TAG: TEST_CASE
  FIELDS:
  - TITLE: MID
    TYPE: String
    REQUIRED: False
  - TITLE: UID
    TYPE: String
    REQUIRED: False
  - TITLE: METHOD
    TYPE: SingleChoice(Automatic, Manual)
    REQUIRED: True
  - TITLE: OBJECTIVE
    TYPE: String
    REQUIRED: True
  - TITLE: DESCRIPTION
    TYPE: String
    REQUIRED: True
  - TITLE: INPUT
    TYPE: String
    REQUIRED: False
  - TITLE: PASS_CRITERIA
    TYPE: String
    REQUIRED: False
  RELATIONS:
  - TYPE: Parent
- TAG: TEST_RESULT
  FIELDS:
  - TITLE: UID
    TYPE: String
    REQUIRED: False
  - TITLE: STATUS
    TYPE: SingleChoice(Passed, Failed)
    REQUIRED: True
  - TITLE: CONTENT
    TYPE: String
    REQUIRED: True
  RELATIONS:
  - TYPE: Parent
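Under this grammar, the transform emits something along the lines of the sketch below; the UID suffix, the CONTENT text, and the helper itself are placeholders, not the exact XSLT output. The Parent relation from TEST_RESULT back to its TEST_CASE is what gives the forward traceability.

def render_test_result(testcase_uid: str, status: str, content: str) -> str:
    # Emit one TEST_RESULT node pointing back at its TEST_CASE via a
    # Parent relation (placeholder UID scheme: "<testcase UID>.RESULT").
    return "\n".join([
        "[TEST_RESULT]",
        f"UID: {testcase_uid}.RESULT",
        f"STATUS: {status}",
        "CONTENT: >>>",
        content,
        "<<<",
        "RELATIONS:",
        "- TYPE: Parent",
        f"  VALUE: {testcase_uid}",
        "",
    ])

print(render_test_result("TestCaseVc.Steering1", "Passed", "Placeholder CI output."))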
At the moment the rendered report looks like the screenshot below.
Inclusion of results from manually performed tests is a bit trickier. I'm thinking I'll have to resort to creating result files named with the Git hash, so that the CI can automatically find the correct file and include it, e.g. test_results_8d9abc2.sdoc. But where to store them and how to version them is the question.
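The lookup itself in CI could be as simple as the sketch below, assuming that naming scheme and a hypothetical manual_results/ directory:

import subprocess
from pathlib import Path
from typing import Optional

def find_manual_results(results_dir: str = "manual_results") -> Optional[Path]:
    # Resolve the short hash of the commit the CI run is building.
    short_hash = subprocess.run(
        ["git", "rev-parse", "--short", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    # Look for a manual test results file matching that commit.
    candidate = Path(results_dir) / f"test_results_{short_hash}.sdoc"
    return candidate if candidate.exists() else None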
I'd also need to figure out an easy and user-friendly way to author these manual test results from [TEST_CASE] nodes.
Description
As a developer of a software product, I want to produce a verification report that traces requirements to the results of the tests that verify them. Some requirements are verified in the CI pipeline, while others are verified by hand.
Problem
Currently there is no direct support for taking in dynamic data at StrictDoc export time to facilitate the creation of a verification report. A feature, or a recommended process, that enables the creation of the required traceability is desired.
UIDs of test cases should ideally be preserved over the project life cycle. Redundant/duplicated information should be avoided.
Solution
?