tcunit / TcUnit

A unit testing framework for Beckhoff's TwinCAT 3

Export results in Xunit XML format #11

Closed fedepell closed 4 years ago

fedepell commented 5 years ago

Connecting also to #7: it would be nice if the framework could output the results in the standard xUnit XML format. That way other integration software (e.g. the Jenkins you mention) can easily manage and display the results (e.g., for Jenkins, with the JUnit Jenkins plugin).

I believe these pointers can be useful for the format: https://gist.github.com/erikd/4192748 https://gist.github.com/nil4/7a3cd9c23835ec6b126fe588e836a2e8 https://github.com/windyroad/JUnit-Schema
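For readers less familiar with the format, a minimal JUnit/xUnit-style result file can be sketched with the Python standard library. The suite and test names below are purely illustrative, not TcUnit output:

```python
import xml.etree.ElementTree as ET

# Illustrative results only: (test name, failure message or None if passed).
results = [
    ("Test_BOOL_Differ", "Values differ"),
    ("Test_INT_Equal", None),
]
failed = sum(1 for _, msg in results if msg is not None)

# testsuites > testsuite > testcase, with aggregate counts as attributes.
suites = ET.Element("testsuites", tests=str(len(results)), failures=str(failed))
suite = ET.SubElement(suites, "testsuite", name="PrimitiveTypes",
                      tests=str(len(results)), failures=str(failed))
for name, msg in results:
    case = ET.SubElement(suite, "testcase", name=name, classname="PrimitiveTypes")
    if msg is not None:
        # A failed test carries a <failure> child with a message attribute.
        ET.SubElement(case, "failure", message=msg, type="ASSERTION").text = "Assertion failed"

report = ET.tostring(suites, encoding="unicode")
print(report)
```

A CI tool such as Jenkins's JUnit plugin mostly needs this nesting plus the count attributes to render a report.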

sagatowski commented 5 years ago

Excellent suggestion! Will move this to a new milestone.

Aliazzzz commented 5 years ago

Dear Jakob,

A file factory could be very useful for generating different types of output, such as xUnit XML or JSON.

I did some research on this subject, and the "Abstract Factory" pattern could be very suitable for implementing the file factory.

However, there are important details to consider when designing the file factory as an Abstract Factory. I found some very interesting information on this topic offline (in some .chm files) and googled for it, but had no luck finding it quickly. It is written from a CODESYS point of view, but I bet it is interoperable with TwinCAT.

In CODESYS there are two possibilities of dynamically creating an FB instance:

A.) By using the operators NEW and DELETE and a MemoryPool, which has to be parameterized for the respective application via CODESYS.

However, this method has a significant disadvantage: the MemoryPool is defined once for the application and all the libraries it references, but it is not possible to determine how much storage space each library is allowed to claim in the MemoryPool. The haphazard use of NEW and DELETE with differently sized FB instances fragments the MemoryPool. Because of that, no system can be reliably deployed for non-stop operation throughout the entire year; an occasional reboot of the facility would be necessary to re-establish the MemoryPool. That is why the MemoryPool and the operators NEW and DELETE can be recommended only for very special cases. Especially when dealing with libraries, this method should not be used!

B.) The library FBFactory offers a second, more reliable way to meet the demands of dynamically creating an FB instance. With the help of the FBFactory library, a static pool (which can be extended into a dynamic pool if needed) is defined in advance for each factory. The management of the memory space is organized in a way that ensures no fragmentation is possible, so there is no obstacle to non-stop operation of the system.

The FBFactory library can be used as a base for building factories. For more information about how to use it for building factories, please download the test project found in the documentation of the FBFactory library.

So, I'd like to try to implement the file factory using the Abstract Factory pattern via method B. If everything works out, it will be usable in TcUnit as well as CfUnit and function as a central file factory distribution point.
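The idea behind method B can be sketched in a language-neutral way: the factory interface hides the concrete formatter type, and each concrete factory hands out instances from a pool that is fully allocated up front, so no allocation (and hence no fragmentation) happens at run time. The Python below only illustrates the pattern; names like ResultFormatter are invented for this sketch and are not TcUnit or FBFactory API:

```python
from abc import ABC, abstractmethod

class ResultFormatter(ABC):
    """Product interface: turns test results into some output format."""
    @abstractmethod
    def format(self, results: list) -> str: ...

class XUnitXmlFormatter(ResultFormatter):
    def format(self, results: list) -> str:
        return "<testsuites/>"  # placeholder body

class FormatterFactory(ABC):
    """Abstract factory with a static pool: every instance is created in
    __init__, mimicking FBFactory's pre-allocated pool (no NEW/DELETE)."""
    POOL_SIZE = 4

    def __init__(self):
        self._pool = [self._create() for _ in range(self.POOL_SIZE)]
        self._in_use = [False] * self.POOL_SIZE

    @abstractmethod
    def _create(self) -> ResultFormatter: ...

    def acquire(self):
        for i, busy in enumerate(self._in_use):
            if not busy:
                self._in_use[i] = True
                return self._pool[i]
        return None  # pool exhausted: fail explicitly instead of allocating

    def release(self, instance):
        self._in_use[self._pool.index(instance)] = False

class XUnitXmlFactory(FormatterFactory):
    def _create(self) -> ResultFormatter:
        return XUnitXmlFormatter()

factory = XUnitXmlFactory()
fmt = factory.acquire()
print(fmt.format([]))  # prints: <testsuites/>
```

Returning None on exhaustion mirrors the fixed-size pool behaviour: the caller sees a deterministic limit instead of hidden run-time allocation.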

Please let me know what you think of this proposition.

Aliazzz

sagatowski commented 5 years ago

This was exactly the pattern I was thinking of for this issue. Stefan Henneken has written an excellent blog post about this design pattern, which is exactly what I planned to use here: https://stefanhenneken.wordpress.com/2014/11/16/iec-61131-6-abstract-factory-english/

Considering that this library will be included as a library in other people's code, and that there seems to be an allergy (for good reasons) to dynamic memory allocation, I'm all in for a file factory with pre-allocated memory.

Edit: Please post an update here once you have an implementation of the Abstract Factory pattern via method B; then I can hopefully use your code!

Aliazzzz commented 5 years ago

I integrated an XML writer into TcUnit. I think it should suffice to produce files like this:

<?xml version="1.0" encoding="UTF-8" ?>
<testsuites id="20140612_170519" name="New_configuration (14/06/12 17:05:19)" tests="225" failures="1262" time="0.001">
  <testsuite id="codereview.cobol.analysisProvider" name="COBOL Code Review" tests="45" failures="17" time="0.001">
    <testcase id="codereview.cobol.rules.ProgramIdRule" name="Use a program name that matches the source file name" time="0.001">
      <failure message="PROGRAM.cbl:2 Use a program name that matches the source file name" type="WARNING">
WARNING: Use a program name that matches the source file name
Category: COBOL Code Review – Naming Conventions
File: /project/PROGRAM.cbl
Line: 2
      </failure>
    </testcase>
  </testsuite>
</testsuites>

Aliazzzz commented 5 years ago

Hi,

I've just been searching for the right file markup, as I cannot validate it easily; the following primer seems very handy! We just need to verify this.

https://stackoverflow.com/questions/4922867/what-is-the-junit-xml-format-specification-that-hudson-supports

<?xml version="1.0" encoding="UTF-8"?>
<testsuites disabled="" errors="" failures="" name="" tests="" time="">
    <testsuite disabled="" errors="" failures="" hostname="" id=""
               name="" package="" skipped="" tests="" time="" timestamp="">
        <properties>
            <property name="" value=""/>
        </properties>
        <testcase assertions="" classname="" name="" status="" time="">
            <skipped/>
            <error message="" type=""/>
            <failure message="" type=""/>
            <system-out/>
            <system-err/>
        </testcase>
        <system-out/>
        <system-err/>
    </testsuite>
</testsuites>

Some of these items can occur multiple times:

  • There can only be one testsuites element, since that's how XML works, but there can be multiple testsuite elements within the testsuites element.
  • Each properties element can have multiple property children.
  • Each testsuite element can have multiple testcase children.
  • Each testcase element can have multiple error, failure, system-out, or system-err children.
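These multiplicity rules are easy to check mechanically with Python's standard parser; the document below is a hypothetical fragment in the same shape:

```python
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
  <testsuite name="SuiteA" tests="2" failures="1">
    <testcase name="t1">
      <failure message="Values differ">Assertion failed</failure>
    </testcase>
    <testcase name="t2"/>
  </testsuite>
  <testsuite name="SuiteB" tests="0" failures="0"/>
</testsuites>"""

root = ET.fromstring(doc)
assert root.tag == "testsuites"          # exactly one root element
suites = root.findall("testsuite")       # several testsuite children are fine
cases = suites[0].findall("testcase")    # several testcase children are fine
print(len(suites), len(cases))           # prints: 2 2
```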
Aliazzzz commented 4 years ago

Progress so far (a small code preview); the end result will be far larger:

(*
    This function block reports the results from the tests into a xUnit compatible XmlFile

*)
FUNCTION_BLOCK FB_XmlFileFormatter IMPLEMENTS I_TestResultFormatter

~~~~

Busy := TRUE;
IF NumberOfFailedTestCases > 0 THEN
    Xml.writeDocumentHeader(Header := '<?xml version=\"1.0\" encoding=\"UTF-8\"?>');
    xml.NewComment('xUnit xml created by TcUnit CI-CD (e.g. Jenkins)');
    xml.NewTag('testsuites');
    FOR SuiteCounter := 1 TO NumberOfTestSuites BY 1 DO
        xml.NewTag('testsuite');
        NumberOfFailedTestsInCurrentSuite :=  GVL_TcUnit.TestSuiteAddresses[SuiteCounter]^.GetNumberOfFailedTests();
        Xml.NewPara('errors', UINT_TO_STRING(NumberOfFailedTestsInCurrentSuite));
        NumberOfTestsInCurrentSuite := GVL_TcUnit.TestSuiteAddresses[SuiteCounter]^.GetNumberOfTests();
        Xml.NewPara('tests', UINT_TO_STRING(NumberOfTestsInCurrentSuite) );
        FOR TestCounter := 1 TO NumberOfTestsInCurrentSuite BY 1 DO
            IF GVL_TcUnit.TestSuiteAddresses[SuiteCounter]^.Tests[TestCounter].IsFailed() THEN
                ; (* TODO: write the <testcase> and <failure> entries here *)
            END_IF
        END_FOR
    END_FOR
END_IF
Busy := FALSE;

Current log:

<Entry>| ======================================</Entry>
<Entry>| Failed tests: 92</Entry>
<Entry>| Successful tests: 112</Entry>
<Entry>| Tests: 204</Entry>
<Entry>| Test suites: 17</Entry>
<Entry>| ========== TESTS FINISHED RUNNING ==========</Entry>
<Entry>| ======================================</Entry>
<Entry>| write: /XmlControl/output.xml success</Entry>
<Entry>| ========== WRITING TEST REPORT ==========</Entry>

Running the verifier project yields the following file. The intermediate result looks very promising :-)

<?xml version=\"1.0\" encoding=\"UTF-8\"?>
<!-- xUnit xml created by CfUnit CI-CD (e.g. Jenkins) -->
<testsuites>
<testsuite errors="22" tests="44"/>
<testsuite errors="2" tests="4"/>
<testsuite errors="1" tests="1"/>
<testsuite errors="0" tests="1"/>
<testsuite errors="38" tests="57"/>
<testsuite errors="1" tests="1"/>
<testsuite errors="21" tests="42"/>
<testsuite errors="1" tests="1"/>
<testsuite errors="1" tests="24"/>
<testsuite errors="1" tests="1"/>
<testsuite errors="3" tests="3"/>
<testsuite errors="1" tests="1"/>
<testsuite errors="0" tests="3"/>
<testsuite errors="0" tests="0"/>
<testsuite errors="0" tests="1"/>
<testsuite errors="0" tests="20"/>
<testsuite errors="0" tests="0"/>
</testsuites>

PS: I really don't care about beautifying the XML with tabs etc., as long as the parser accepts the file.
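A side note on the declaration in the output above: backslash-escaped quotes (`\"`) are not valid XML, so a conforming parser will reject the file. A quick sanity check is to round-trip the output through a parser, sketched below:

```python
import xml.etree.ElementTree as ET

# The escaped variant reproduces the literal \" sequence from the output above.
escaped = '<?xml version=\\"1.0\\" encoding=\\"UTF-8\\"?>\n<testsuites/>'
plain = '<?xml version="1.0" encoding="UTF-8"?>\n<testsuites/>'

def is_well_formed(document: str) -> bool:
    """True if the document parses without error."""
    try:
        ET.fromstring(document)
        return True
    except ET.ParseError:
        return False

print(is_well_formed(escaped), is_well_formed(plain))  # prints: False True
```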

sagatowski commented 4 years ago

Note to self: I'm using this template: https://wiki.yoctoproject.org/wiki/QA/xUnit_XML_Template in the TcUnit-Runner + Jenkins plugin. I've marked and selected the XML that the first version will support in the attached file: testing

Aliazzzz commented 4 years ago

This is an update on the much-awaited testresult.xml output:

The output below represents the main XML skeleton. It is only missing the necessary data from the test results, collected in a single spot from which this file can be produced. The library for creating these files is flexible and modular: any arbitrary XML tag or attribute can be added as long as we have the accompanying data from the test. Also, the file path, file name and maximum file size (= buffer size reserved in PLC memory to create it) can be set via the library parameters.

Waiting for the update of TcUnit for "centralised" results (#68) to fill in the data gaps.

<?xml version=\"1.0\" encoding=\"UTF-8\"?>
<testsuites>
<testsuite name="PrimitiveTypes" errors="22" tests="44"/>
<testcase name="Test_ANY_Differ_DataType"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_BOOL_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_BYTE_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_DATE_AND_TIME_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_DATE_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_DINT_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_DWORD_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_INT_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_LINT_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_LREAL_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_LTIME_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_LWORD_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_REAL_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_SINT_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_STRING_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_TIME_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_TIME_OF_DAY_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_UDINT_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_UINT_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_ULINT_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_USINT_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_WORD_Differ"/>
<failure message="">Assertion failed</failure>
<testsuite name="AssertTrueFalse" errors="2" tests="4"/>
<testcase name="AssertThatINTsAreEqual"/>
<failure message="INTs are equal">Assertion failed</failure>
<testcase name="AssertThatWORDsAreEqual"/>
<failure message="">Assertion failed</failure>
<testsuite name="AssertEveryFailedTestTwice" errors="1" tests="1"/>
<testcase name="TwiceAssertCall"/>
<failure message="Not equal ANY">Assertion failed</failure>
<testsuite name="CreateFourTestsWithSameName" errors="0" tests="1"/>
<testsuite name="ArrayPrimitiveTypes" errors="38" tests="57"/>
<testcase name="Test_BOOL_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_BOOL_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_BYTE_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_BYTE_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_DINT_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_DINT_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_DWORD_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_DWORD_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_INT_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_INT_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_LINT_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_LINT_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_LREAL_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_LREAL_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_LREAL_Array2d_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_LREAL_Array2d_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_LREAL_Array3d_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_LREAL_Array3d_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_LWORD_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_LWORD_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_REAL_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_REAL_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_REAL_Array2d_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_REAL_Array2d_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_REAL_Array3d_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_REAL_Array3d_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_SINT_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_SINT_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_UDINT_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_UDINT_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_UINT_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_UINT_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_ULINT_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_ULINT_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_USINT_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_USINT_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_WORD_Array_DifferInSize"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_WORD_Array_DifferInContent"/>
<failure message="">Assertion failed</failure>
<testsuite name="CreateDisabledTest" errors="1" tests="1"/>
<testcase name="TestEnabled"/>
<failure message="A does not equal B">Assertion failed</failure>
<testsuite name="AnyPrimitiveTypes" errors="21" tests="42"/>
<testcase name="Test_ANY_BYTE_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_ANY_BOOL_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_ANY_DATE_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_ANY_DATE_AND_TIME_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_ANY_DINT_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_ANY_DWORD_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_ANY_INT_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_ANY_LINT_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_ANY_LREAL_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_ANY_LTIME_Differ"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Test_ANY_LWORD_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_ANY_REAL_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_ANY_SINT_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_ANY_STRING_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_ANY_TIME_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_ANY_TIME_OF_DAY_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_ANY_UDINT_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_ANY_UINT_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_ANY_ULINT_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_ANY_USINT_Differ"/>
<failure message="">Assertion failed</failure>
<testcase name="Test_ANY_WORD_Differ"/>
<failure message="">Assertion failed</failure>
<testsuite name="AssertEveryFailedTestTwiceArrayVersion" errors="1" tests="1"/>
<testcase name="TwiceAssertCall_Arrays"/>
<failure message="">Assertion failed</failure>
<testsuite name="AnyToUnionValue" errors="1" tests="24"/>
<testcase name="Test_STRING"/>
<failure message="">Assertion failed</failure>
<testsuite name="MultipleAssertWithSameParametersInSameCycleWithSameTest" errors="1" tests="1"/>
<testcase name="Assert_SeveralTimes"/>
<failure message="Values differ">Assertion failed</failure>
<testsuite name="MultipleAssertWithSameParametersInDifferentCyclesButWithDifferentTests" errors="3" tests="3"/>
<testcase name="Assert_SeveralTimes"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Assert_SeveralTimesAgain"/>
<failure message="Values differ">Assertion failed</failure>
<testcase name="Assert_SeveralTimesAgainAgain"/>
<failure message="Values differ">Assertion failed</failure>
<testsuite name="MultipleAssertWithSameParametersInDifferentCyclesAndInSameTest" errors="1" tests="1"/>
<testcase name="Assert_SeveralTimes"/>
<failure message="Values differ">Assertion failed</failure>
<testsuite name="AdjustAssertFailureMessageToMax252CharLengthTest" errors="0" tests="3"/>
<testsuite name="EmptyTestSuite" errors="0" tests="0"/>
<testsuite name="CheckIfSpecificTestIsFinished" errors="0" tests="1"/>
<testsuite name="WriteProtectedFunctions" errors="0" tests="20"/>
<testsuite name="TestFinishedNamed" errors="0" tests="0"/>
</testsuites>
fedepell commented 4 years ago

Just some minor comments (comparing with a Google Test-generated XML):

<?xml version=\"1.0\" encoding=\"UTF-8\"?> Are these quote escapes really needed? (gtest doesn't use them)

<testsuites> Google Test also adds a summary, which is then used by Jenkins (or other tools): <testsuites tests="6" failures="0" disabled="0" errors="0" timestamp="2019-12-20T11:48:18" time="0" name="AllTests"> Would that be easy to add? (The counters for sure, maybe not the time; and the name of the test container, so if you have many you can divide them better when displaying.)

Gtest also adds a few more things for each test; not sure how vital, but some may be trivial to add: <testcase name="send_ESTelemetric" status="run" time="0" classname="ESCommandTest" /> I suppose the status, at least, would be trivial.

On the errors: <testsuite name="MultipleAssertWithSameParametersInSameCycleWithSameTest" errors="1" tests="1"/> Google Test seems to use failures here: <testsuite name="ESCommandTest" tests="6" failures="0" disabled="0" errors="0" time="0"> Digging more, I found this differentiation:

Both failure and error in JUnit tests indicate an undesired situation, but their semantics are different. Failures notify of an invalid test result, errors indicate an unexpected test execution.

So I would say failure should be the right attribute here?
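The quoted failure/error distinction maps naturally onto exception handling in a test runner: an assertion failure becomes a <failure>, any other unexpected exception becomes an <error>. A sketch (all names invented for illustration, not TcUnit code):

```python
import xml.etree.ElementTree as ET

def record_case(suite: ET.Element, name: str, test_fn) -> None:
    """Run one test; an AssertionError (invalid test result) is reported as
    <failure>, any other exception (unexpected execution) as <error>."""
    case = ET.SubElement(suite, "testcase", name=name)
    try:
        test_fn()
    except AssertionError as exc:
        ET.SubElement(case, "failure", message=str(exc), type="ASSERTION")
    except Exception as exc:
        ET.SubElement(case, "error", message=str(exc), type=type(exc).__name__)

def t_pass(): pass
def t_fail(): assert 1 == 2, "Values differ"
def t_error(): raise RuntimeError("unexpected I/O problem")

suite = ET.Element("testsuite", name="Demo", tests="3")
for fn in (t_pass, t_fail, t_error):
    record_case(suite, fn.__name__, fn)
print(ET.tostring(suite, encoding="unicode"))
```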

sagatowski commented 4 years ago

@fedepell I agree with the comments, though I don't understand the purpose of the "status". I thought the purpose of writing the file/reporting the results was to do it only once the tests had finished running?

sagatowski commented 4 years ago

@Aliazzzz Could you synchronize so that the output of your writer matches the TcUnit-Runner version (according to the QA/xUnit XML Template in my previous comment above)?

fedepell commented 4 years ago

@sagatowski Indeed, looking at the schema it is not there; maybe it's just gtest being overdescriptive :) My gut feeling is that you may also get, for example, aborted or disabled in that line. (And indeed aborted is one of the values defined here, but disabled is not.)

Aliazzzz commented 4 years ago

@sagatowski Yes, synchronizing the output with the schema has always been the intention. As stated: "Any arbitrary xml tag or parameter can be added as long as we have the accompanying data from the test."

Aliazzzz commented 4 years ago
<?xml version="1.0" encoding="UTF-8" ?>
<!-- from https://svn.jenkins-ci.org/trunk/hudson/dtkit/dtkit-format/dtkit-junit-model/src/main/resources/com/thalesgroup/dtkit/junit/model/xsd/junit-4.xsd -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">

    <xs:element name="failure">
        <xs:complexType mixed="true">
            <xs:attribute name="type" type="xs:string" use="optional"/>
            <xs:attribute name="message" type="xs:string" use="optional"/>
        </xs:complexType>
    </xs:element>

    <xs:element name="error">
        <xs:complexType mixed="true">
            <xs:attribute name="type" type="xs:string" use="optional"/>
            <xs:attribute name="message" type="xs:string" use="optional"/>
        </xs:complexType>
    </xs:element>

    <xs:element name="properties">
        <xs:complexType>
            <xs:sequence>
                <xs:element ref="property" maxOccurs="unbounded"/>
            </xs:sequence>
        </xs:complexType>
    </xs:element>

    <xs:element name="property">
        <xs:complexType>
            <xs:attribute name="name" type="xs:string" use="required"/>
            <xs:attribute name="value" type="xs:string" use="required"/>
        </xs:complexType>
    </xs:element>

    <xs:element name="skipped" type="xs:string"/>
    <xs:element name="system-err" type="xs:string"/>
    <xs:element name="system-out" type="xs:string"/>

    <xs:element name="testcase">
        <xs:complexType>
            <xs:sequence>
                <xs:element ref="skipped" minOccurs="0" maxOccurs="1"/>
                <xs:element ref="error" minOccurs="0" maxOccurs="unbounded"/>
                <xs:element ref="failure" minOccurs="0" maxOccurs="unbounded"/>
                <xs:element ref="system-out" minOccurs="0" maxOccurs="unbounded"/>
                <xs:element ref="system-err" minOccurs="0" maxOccurs="unbounded"/>
            </xs:sequence>
            <xs:attribute name="name" type="xs:string" use="required"/>
            <xs:attribute name="assertions" type="xs:string" use="optional"/>
            <xs:attribute name="time" type="xs:string" use="optional"/>
            <xs:attribute name="classname" type="xs:string" use="optional"/>
            <xs:attribute name="status" type="xs:string" use="optional"/>
        </xs:complexType>
    </xs:element>

    <xs:element name="testsuite">
        <xs:complexType>
            <xs:sequence>
                <xs:element ref="properties" minOccurs="0" maxOccurs="1"/>
                <xs:element ref="testcase" minOccurs="0" maxOccurs="unbounded"/>
                <xs:element ref="system-out" minOccurs="0" maxOccurs="1"/>
                <xs:element ref="system-err" minOccurs="0" maxOccurs="1"/>
            </xs:sequence>
            <xs:attribute name="name" type="xs:string" use="required"/>
            <xs:attribute name="tests" type="xs:string" use="required"/>
            <xs:attribute name="failures" type="xs:string" use="optional"/>
            <xs:attribute name="errors" type="xs:string" use="optional"/>
            <xs:attribute name="time" type="xs:string" use="optional"/>
            <xs:attribute name="disabled" type="xs:string" use="optional"/>
            <xs:attribute name="skipped" type="xs:string" use="optional"/>
            <xs:attribute name="timestamp" type="xs:string" use="optional"/>
            <xs:attribute name="hostname" type="xs:string" use="optional"/>
            <xs:attribute name="id" type="xs:string" use="optional"/>
            <xs:attribute name="package" type="xs:string" use="optional"/>
        </xs:complexType>
    </xs:element>

    <xs:element name="testsuites">
        <xs:complexType>
            <xs:sequence>
                <xs:element ref="testsuite" minOccurs="0" maxOccurs="unbounded"/>
            </xs:sequence>
            <xs:attribute name="name" type="xs:string" use="optional"/>
            <xs:attribute name="time" type="xs:string" use="optional"/>
            <xs:attribute name="tests" type="xs:string" use="optional"/>
            <xs:attribute name="failures" type="xs:string" use="optional"/>
            <xs:attribute name="disabled" type="xs:string" use="optional"/>
            <xs:attribute name="errors" type="xs:string" use="optional"/>
        </xs:complexType>
    </xs:element>

</xs:schema>
sagatowski commented 4 years ago

I guess the source of the last comment is this XSD: https://github.com/junit-team/junit5/blob/master/platform-tests/src/test/resources/jenkins-junit.xsd

(putting it there as a reminder so I can use it myself when verifying the functionality of TcUnit-Runner)

Aliazzzz commented 4 years ago

Excellent, we should adhere to the v5 XSD, as mine is v4. I expect it to be backward-compatible, but I'll test it ASAP anyway.

To be continued

sagatowski commented 4 years ago

The XML-creator part of TcUnit-Runner is finished and working. It produced the following artifact based on the TcUnit-Verifier project:

testresult_tcunitrunner.xml.txt

I verified it against the following XSD: https://github.com/junit-team/junit5/blob/master/platform-tests/src/test/resources/jenkins-junit.xsd

And it's all fine

SoftwareQualityLabPirklbauer commented 4 years ago

Hi! Since I am really interested in this feature, I have a question for you. I have seen that this feature will be implemented in the next release, 1.1.0.0.

Is there a guide for how the XML creator will be used in TwinCAT (FB, function, ...)?

Best regards

sagatowski commented 4 years ago

Hi!

The xUnit XML file creation is done by a program (still being written/documented; it will be released as open source as well) that does the integration of TwinCAT/TcUnit into a CI/CD tool (Jenkins). This program is called "TcUnit-Runner" and is executed by Jenkins.

In a future release (after 1.1) of TcUnit it will also be possible to create the XML file directly in TcUnit, though the supported way of doing this in 1.1 will be through TcUnit-Runner.

SoftwareQualityLabPirklbauer commented 4 years ago

Thanks for your quick response

Sounds great! So, I am really looking forward to 1.1

sagatowski commented 4 years ago

@SoftwareQualityLabPirklbauer No problem! Besides the software, there will be plenty of documentation released together with 1.1, and unfortunately that is what's taking most of the time now :-)

Aliazzzz commented 4 years ago

I made a PR for the new JUnitXmlPublisher which is embedded within TcUnit. The code is written in such a way that it is easily extensible if needed.

Edit: It seems the current PR needs to be revised... To be continued!

DominicOram commented 4 years ago

@sagatowski This is great! Your description of TcUnit-Runner actually matches something we've been working on internally for testing. I have a couple of questions/suggestions about it:

  1. I assume it's based on the Automation Interface? In which case it's C#-based?
  2. Regarding "Compile that specific task/program in the project": I was actually compiling all code, since if the project fails to compile, that's an error we should let Jenkins know about.
  3. We have additional tests where we run up a task and confirm that we can communicate with the PLC over ADS from a different piece of software. Essentially the job of the test runner here is to start a task and then stop it when I say I'm happy.

Are use cases 2 and 3 something you would be happy to have in TcUnit-Runner as options? If so, we'd be happy to do the implementation ourselves.

sagatowski commented 4 years ago

@DominicOram That's great! Does it mean you've basically already done a "TcUnit-Runner" for TcUnit?

To answer your questions:

  1. Correct/correct
  2. Thinking about it, it makes more sense to compile the whole project to see that it compiles, and only run the tests if it does. Regarding the running, it's necessary to run only the TcUnit task, as some users might have test cases in full-blown programs (not libraries) where tasks are linked to I/Os etc. (which would then not be able to execute).
  3. Can you elaborate this further?

You're more than welcome to contribute to the TcUnit-Runner. It's a private repo I've been experimenting with but I'll make sure to make it public on the TcUnit-site quite soon.

DominicOram commented 4 years ago

Does it mean you've basically already done a "TcUnit-Runner" for TcUnit?

Not really, we've only just started looking into TcUnit. Currently my test runner just confirms that everything compiles and then performs the testing described in 3). Adding TcUnit was the next thing I was going to look at. The code for this is in https://github.com/ISISComputingGroup/BeckhoffTestRunner if you're interested.

Regarding the running it's necessary to run only the TcUnit-task as some users might have test cases in full-blown programs (not libraries) where there are tasks linked to I/Os etc (which will not be able to execute then)

Agreed.

Can you elaborate this further?

Happy to elaborate further, but this feels like getting well off topic for this issue. Maybe I could add an issue to the TcUnit-Runner repo, once it's public, to describe the situation in depth? In the meantime, there is some documentation here that describes the system. The program described as "Automation Tools" in that documentation is the one that sounds similar to TcUnit-Runner. The documentation is aimed at our internal developers, though, so it may raise more questions than answers...

sagatowski commented 4 years ago

@DominicOram The BeckhoffTestRunner seems like a very interesting project; I'll look deeper into it! Just from the "continuous integration" picture in the documentation, we seem to be struggling with the same problems.

I'll make the TcUnit-Runner repo public as soon as possible and would highly appreciate feedback/issues for it. The code is "experimental" to say the least (limited error handling, not all VS/TwinCAT versions are handled correctly, etc.), so contributions/ideas are more than welcome. I'm happy that you seem to be working on the same challenges I've been struggling with, so I'll make sure to ping you here with the link to TcUnit-Runner once it's public.

Aliazzzz commented 4 years ago

Update: added a quick fix to enable the JUnitXmlPublisher. I tested it under TwinCAT v4022.30 and it now works.

The output path defaults to "C:\testresults.xml" and can be changed via a parameter.

sagatowski commented 4 years ago

Parts of this functionality are now merged into main via PR #91: https://github.com/tcunit/TcUnit/pull/91

sagatowski commented 4 years ago

Finally implemented. To say this was a major task would be an understatement. See: