ota4j-team / opentest4j

Open Test Alliance for the JVM
Apache License 2.0

Define a standard for test reports #9

Closed. alb-i986 closed this issue 2 years ago.

alb-i986 commented 8 years ago

Could it be within scope?

The goal here is to have a common format which testing frameworks and build tools can understand, so that, for example, JUnit (producer) can write the test report in this common format, and then Jenkins (consumer) can read it and display it in its web UI in HTML or whatever other form is desired.

Something like maven-surefire's XSD.

Instead of XML, it could be JSON.

Ideally it should also provide an extension point for attaching user-defined content to a testcase element. For example, for UI tests it would be nice if there were a common understanding of the concept of a failure screenshot (which Selenium can output as base64).
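
For illustration, such an extensible testcase element might look like the following JSON sketch (all property names and values here are hypothetical, and the base64 payload is truncated):

{
  "name": "loginShowsErrorOnBadPassword",
  "status": "failed",
  "attachments": [
    {
      "name": "failure screenshot",
      "type": "image/png",
      "encoding": "base64",
      "body": "iVBORw0KGgoAAAANSUhEUgAA..."
    }
  ]
}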

What do you think?

marcphilipp commented 8 years ago

I think that makes sense. However, I'm not sure how to get all the stakeholders to join in. A well-drafted proposal can't hurt, though. πŸ˜‰

sbrannen commented 8 years ago

I also think it's a good idea, and the OTA would be an ideal place to host such a standard.

And I agree with @marcphilipp: this will require a well-thought-out (with extensibility and flexibility in mind) and well-documented draft.

With that in mind, feel free to propose such a draft!

baev commented 8 years ago

Allure Framework has its own format that supports attachments, steps and a bit more. We have a lot of adaptors that store test results in our format:

and so on. Basically our format extends Surefire's and adds a few more attributes. For example, it replaces duration with start and stop timestamps, so you can easily calculate the test duration during the reporting phase. You can also calculate the duration of the entire test run (and build cool time diagrams that show which tests were run in parallel πŸ˜„). The main idea of the whole schema is to add more info to Surefire's format and remove redundant attributes that can be calculated during the report phase.
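
A minimal sketch of the start/stop idea, shown as JSON for brevity (names and values invented):

{
  "name": "shouldLogIn",
  "status": "passed",
  "start": 1464275840123,
  "stop": 1464275841456
}

The test duration is then simply stop - start, and the duration of the whole run falls out of the minimum start and maximum stop across all results.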

But there are some problems with our format: it cannot store information about test fixtures, nested suites, etc. Also, some frameworks use a different grouping, not Test Suite -> Test Case but Feature -> Story -> Test Case, so it would be nice to support such cases as well.

For now we are going to create a new version of the Allure schema to fix these issues. It would be nice to cooperate with you and create an open test results format.

marcphilipp commented 8 years ago

It would be great to have such a common format! Do you have a proposal already that we can use as a starting point for the discussion?

baev commented 8 years ago

Do you have a proposal already that we can use as a starting point for the discussion?

https://github.com/allurefw/allure2-model/blob/master/src/main/resources/xsd/allure2_model.xsd

sbrannen commented 8 years ago

FYI: I've opened a similar discussion for the JUnit Platform as well: https://github.com/junit-team/junit5/issues/373

mmichaelis commented 8 years ago

Such a standard would be interesting beyond Java. I know of testing frameworks in other ecosystems, for example JavaScript, that deliberately reproduce the JUnit 4 format precisely so that CI servers like Jenkins/Hudson can interpret the results. That also argues for at least an alternative format in JSON, which JavaScript can easily deal with.

kcooney commented 7 years ago

One problem in general with using XML for reporting test results is that XML isn't well suited for streaming results out (so it couldn't be used to update the JUnit UI in Eclipse). You might want to consider a format that first prints out the test plan, then sends out updates as tests are run. That might be tricky if you support test cases that are generated after the first test has run.

leonard84 commented 7 years ago

I also think that the OTA would be a good place to host such a standard. Supporting JSON as a first-class alternative to XML would further improve adoption by other frameworks. The OTA could provide converters so that consumers could decide how they would like to parse the results.

Furthermore, I agree with @kcooney that the format should be able to support streaming. The Spock Framework uses its own JSON format for its reporting extension. IMO you don't need to print the test plan first; the receiver just needs to be able to handle status updates and update its results based on them. See https://gist.github.com/leonard84/da60eb029b0cfd40882be799c8ba6864 for an example.

baev commented 7 years ago

BTW I can post an example of Allure 2 format as well:

Sample results

https://gist.github.com/baev/9a06b3907309307ef848d93a07696240

Overview

The main idea is that the format should represent the test execution tree and be as simple as possible. Only one property (name) is required, so you can start with JSON as simple as this:

{
  "name": "Passed test"
}

then you can add a test result status (one of failed, broken, passed, skipped; a missing status is processed as unknown), test start/stop timestamps, parameters (a list of key/value pairs), labels (test metadata, also a list of key/value pairs), a full name (useful to locate a test in an IDE), etc.
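
Putting those properties together, a fuller result might look like this (property names assumed, e.g. a camelCase fullName; values invented):

{
  "name": "Failed test",
  "fullName": "org.example.LoginTest.shouldRejectBadPassword",
  "status": "failed",
  "start": 1494857025000,
  "stop": 1494857026000,
  "parameters": [
    { "name": "browser", "value": "firefox" }
  ],
  "labels": [
    { "name": "feature", "value": "login" }
  ]
}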

Test execution tree

You can add a uuid to a result and link to it.

first-result.json:

{
   "uuid": "first",
   "name": "First test" 
}

second-result.json:

{
   "uuid": "second",
   "name": "Second test" 
}

container.json:

{
   "name": "Before method fixtures",
   "children": [ "first", "second" ],
   "before": [
    {
      "name": "Set up method"
     }
   ]
}

This adds the Set up method fixture to both tests. It is also possible to link from container to container (results are the leaf nodes of the tree; containers are the inner nodes).

You can add steps and attachments to results and to fixtures as well.

Attachments

An attachment is a link to any file (logs, screenshots, etc.):

...
"attachments": [
    {
      "name": "Test log",
      "source": "7ab275e9-82db-4b59-b330-ffde8c655fe1-attachment",
      "type": "text/plain"
    }
  ]

Steps

An atomic test logic execution unit. Each test case can consist of one or more steps. Steps allow you to divide complex test cases into smaller pieces of logic. This can dramatically simplify the analysis of test results, because we can determine which part of a test case caused the overall failure:

 "steps": [
   {
      "name": "Click button \"Log On\"",
      "status": "passed",
      "stage": "finished",
      "start": 1494857026131,
      "stop": 1494857026132,
      "steps": [
        {
          "name": "Open HomePage",
          "status": "passed",
          "start": 1494857026132,
          "stop": 1494857026132,
          "parameters": [
            {
              "name": "name",
              "value": "HomePage"
            }
          ]
        }
      ]
    }
 ]

More data

We are also thinking of adding more files with information about the run, such as testplan.json, containing all the tests in the test plan, and environment.json, containing information about the current run environment.
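
Purely illustrative sketches of those two files (nothing about them is specified yet):

testplan.json:

{
  "tests": [
    { "uuid": "first", "fullName": "org.example.FirstTest.run" },
    { "uuid": "second", "fullName": "org.example.SecondTest.run" }
  ]
}

environment.json:

{
  "os": "Linux",
  "jdk": "1.8.0_131",
  "hostname": "build-agent-42"
}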

Runtime support

It is not a problem to process each file separately.

mmichaelis commented 7 years ago

I especially like the steps report. This fits our requirements for UI tests: to easily understand what went wrong and, in case of a failure, to have a readable test body so we can manually try to reproduce the failure. An example of our test scenarios: SokaHH: Testen von Rich-Web-UI ("Testing rich web UIs", in German; based on Gherkin).

If possible, it would also help to have all skipped steps (after a failed step) reported, which is, for example, the behavior of Cucumber and JBehave.

The JSON report provides great support for integration and system tests, since with attachments you can easily add as much context information as required (even a complete database dump), which is especially helpful against the "works on my machine" pattern.

marcphilipp commented 7 years ago

I think the streaming requirement is crucial. So the format should not be nested but should rely on processing tools to create a tree out of nodes linked by uuid. We could use a JSON streaming format, e.g. line-delimited JSON (cf. https://en.wikipedia.org/wiki/JSON_Streaming). Thoughts?
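
For example, a line-delimited stream of flat events, linked by uuid/parent references (the event shapes are invented for illustration):

{"event": "started", "uuid": "c1", "parent": null, "name": "LoginTests"}
{"event": "started", "uuid": "t1", "parent": "c1", "name": "shouldLogIn"}
{"event": "finished", "uuid": "t1", "status": "passed"}
{"event": "finished", "uuid": "c1", "status": "passed"}

A consumer rebuilds the tree by resolving the parent references as events arrive.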

leonard84 commented 7 years ago

IMHO it makes sense to define both a streaming and a storage/archive format. Due to its nature the streaming format has redundant information and is more complicated to analyze.

Using line-delimited JSON for streaming sounds good; it is similar to what Spock does at the moment. It uses a simple TCP protocol, (<length-as-string>'\n'<json>'\n')*, so it's a hybrid of line-delimited and length-prefixed JSON according to https://en.wikipedia.org/wiki/JSON_Streaming. I've updated the gist https://gist.github.com/leonard84/da60eb029b0cfd40882be799c8ba6864 with an example of the streaming output.
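
Concretely, that framing would put something like this on the wire (assuming the length counts the bytes of the JSON line, excluding the newline; the messages themselves are invented):

36
{"event":"started","name":"my test"}
38
{"event":"finished","status":"passed"}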

Switching from names to uuids for merging would decrease the size a bit; however, the fact remains that it would be harder to analyze, and not every client needs streaming input.

That is why I think that having a tree structure for the archive format reduces redundancies and simplifies analysis/visualization of the results.

kcooney commented 7 years ago

@leonard84 I actually do think that the streaming format should emit an initial test plan first. The major IDEs show the entire test tree while the test is running and update it as tests pass and fail. If they want to use the streaming format to do that, it would need to emit the test plan first.

leonard84 commented 7 years ago

@kcooney I think the streaming format should be able to support this use case; I would just say that it can't be a requirement for frameworks to do so, since the consumer needs to be able to handle dynamic tests anyway. So it is just a bit more information up front. And I don't think we need anything special: the framework just emits all tests it finds with name and status and leaves all other fields empty (e.g., startTime, ...).
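
Discovery could then just emit minimal records that later events fill in (shapes invented, following the sketch earlier in the thread):

{"event": "discovered", "uuid": "t1", "name": "shouldLogIn", "status": "unknown"}
{"event": "discovered", "uuid": "t2", "name": "shouldLogOut", "status": "unknown"}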

sebastianbergmann commented 6 years ago

I am interested in this and hope that it will make sense to adapt for PHPUnit (for reasons that are somewhat described at https://github.com/sebastianbergmann/phpunit/issues/2964).

Tibor17 commented 5 years ago

What features are missing in the Surefire XML? We have to keep it, and JUnit 5 can create its own report, which is a similar situation to TestNG's. For instance, Jenkins plugins rely on the Surefire XML, and therefore it has to stay. Over time we have added more features and fixes.

leonard84 commented 5 years ago

@Tibor17 I can think of

A JSON representation was discussed to make it easier to display results with JavaScript clients; of course, a matching XML representation could be defined as well.

Tibor17 commented 5 years ago

@leonard84 Streaming is a question of the system which transfers the test status, not of the format. And the unique id is useless, because people are people: they will always read a human-readable, clear HTML report, like in CI, and they want to match it with the sources. The JXR plugin together with the Surefire report plugin builds such a report. I don't say it is a modern and beautiful report, and there are tools with better visualization, but they are based on the XML report.

baev commented 5 years ago

BTW, in Allure we are now implementing a realtime report feature using a directory watcher. As soon as a result file is written to the file system, Allure will process it and update the report. If you run tests in a distributed environment (such as AWS), it is much easier to work with separate files than with JSON streaming.

But having a brand-new format that fits current needs would be nice. As @leonard84 mentioned, XML is a pain for non-Java users (JavaScript, Python, etc.).

Also, at the moment there are a lot of dialects of the JUnit-style XML report, and it is hard to parse them correctly.

Not all languages have classes, and not all frameworks have test suites (hello, BDD). The current format lacks attachments, test parameters, a unique test id, test start/stop timings, executor information (the hostname is present, but it would be nice to have the thread id), key/value information per test, test nesting, and so on.

leonard84 commented 5 years ago

And the unique id is useless, because people are people: they will always read a human-readable, clear HTML report, like in CI, and they want to match it with the sources.

Sure, but there is a difference between a readable name, with spaces and other characters, and a technical uniqueId which can be used to locate the test. Many frameworks use templated names to make names more readable, but currently this destroys the correct attribution to the source location. The idea is not to remove the test name but to add a uniqueId as the primary key.
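
In other words, something like this (property names hypothetical; the uniqueId syntax here borrows the JUnit Platform's segment style):

{
  "uniqueId": "[engine:junit-jupiter]/[class:org.example.MathTests]/[test-template:square(int)]/[invocation:#3]",
  "name": "3Β² = 9"
}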

Streaming is a question of the system which transfers the test status, not of the format.

I don't agree; the idea is to define a streaming format and an archive format, with rules for how to transform the streaming format into the archive format. This allows the test platform to provide a common interface that can be used by different consumers. The streaming format could use TCP, but it could also write files with an increasing counter in the name, which can then be consumed.

As @baev pointed out, there are several versions of the XML style, and if a tool adds additional fields, the files can be rejected when a consumer uses XSD validation. The JSON format should be implemented in a lenient fashion, i.e., by simply ignoring unknown attributes.
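
For example, a consumer that only knows name and status should still accept this result, silently skipping the (invented) vendor-specific key:

{
  "name": "shouldLogIn",
  "status": "passed",
  "x-vendor-retries": 2
}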

Tibor17 commented 5 years ago

Notice that maven-surefire's XSD is the old format; the new format, maven-surefire 3's XSD, contains a fix.

marcphilipp commented 5 years ago

@Tibor17 Thanks for sharing that! IIUC the new format does not allow for reporting nested structures such as parameterized or nested tests. Do you consider this new format "final" (i.e. are there already downstream tools that parse it) or would it still be possible to change it?

Tibor17 commented 5 years ago

Marc, it all depends on the requirements. We can add more XML elements, but it has to take some time until the format is proven by practical use, and this format must not have any release candidates or milestones which change the outcome over and over again.

marcphilipp commented 5 years ago

Sure, I agree. This is probably also the main reason why this issue has not seen any progress for such a long time. I plan to work on it once JUnit 5.5 is out, but if anyone wants to get started on a proposal, that would be great.

Tibor17 commented 5 years ago

@marcphilipp In 1 we talked about the format too, and we said that the name of the container is not in the report. Please correct me if I am wrong. I want to point out that the XML knows groups, which could probably be used to denote a container name on top of each testcase. These groups are used by TestNG.

Maybe the guys from TestNG, @juherr and @krmahadevan, can tell us what's behind the terminology of groups in report entries (see ITestResult.getMethod().getGroups()). Is a group a parent of a test class?

marcphilipp commented 5 years ago

In 1 we talked about the format too, and we said that the name of the container is not in the report. Please correct me if I am wrong.

Where did we say that?

I want to point out that the XML knows groups, which could probably be used to denote a container name on top of each testcase. These groups are used by TestNG.

AFAIK TestNG's groups are more like the JUnit Platform's tags. A container in JUnit Platform lingo is something that contains tests and/or other containers, i.e. it can be nested.

baev commented 5 years ago

For TestNG, people usually use TestSuite/Test/TestClass as containers and want to see the nested structure in the report.

From the reporting side I can say that there is actually no need for nesting: you can add a label (as Allure did) and then group results by labels.

The main problem with a nested structure is that you need to wait until a container is finished in order to write its results. For example, in TestNG there is usually only one TestSuite, so all the results would be stored in a single file, which is bad for realtime results processing.

marcphilipp commented 5 years ago

Agreed. For streaming, I think separate start/stop events and references to parents by id would be more useful than actually nesting elements in the file.

Tibor17 commented 5 years ago

Maybe you know that I am implementing extensions in Surefire. One is called statelessReport, which is a typical XML report and thus stateless. We are reworking the internals to transmit events, and I believe that later a stateful report would be a new extension which persists the events on the fly. I have quite a lot of work to do to rework Surefire to get to a stateful report. To get there I have to fix several things where we implemented problematic code, and it takes a lot of time to rework legacy code while keeping it backwards compatible and still extensible at the same time. You understand what I want to tell you: the potential exists, maybe with a new structure or maybe with the old report structure, but I would appreciate having a list of attributes covered by the future report XML/YAML. I think we should design the report together, step by step (mapping attributes to HTML sections and to annotations), and involve all the committers from the frameworks in this project. Let's start with writing an asciidoc document exploring all these XML attributes/elements, matching them to HTML sections in the report and to code annotations, for better understanding across the whole community.

kcooney commented 5 years ago

Having a nested structure can help with reporting. It can allow you to get timing at the test suite, test class, and test case level, which can give clues as to where the time is being spent.
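
For instance, with per-node start/stop timestamps in a nested result (structure invented, times in ms), it is immediately visible that nearly all of the suite's time goes to a single test:

{
  "name": "LoginSuite",
  "start": 0,
  "stop": 900,
  "children": [
    {
      "name": "LoginTest",
      "start": 0,
      "stop": 850,
      "children": [
        { "name": "shouldLogIn", "start": 10, "stop": 840 }
      ]
    }
  ]
}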

marcphilipp commented 5 years ago

I think we should design the report together

I totally agree and will share a document in the next few days.

Tibor17 commented 5 years ago

@marcphilipp I meant inviting people from the frameworks to become committers in this project. But first of all, more important than writing the final document (the how) is to write down what information and features are required by these frameworks. Then we will see whether we can find enough of an intersection to agree on one report format, or whether we have to write multiple distinct reports and formats.

JLLeitschuh commented 5 years ago

One thing I want to put out there from a security standpoint.

Something that I've been finding all over the industry when it comes to XML-based standards is the number of specs, standards, and parsers that are vulnerable to XML External Entity (XXE) processing.

These tools were vulnerable to such attacks either via maliciously crafted configuration files or via completely legitimate XML files that simply loaded their schema-validation DTD over an HTTP connection, making them vulnerable to a MITM attack.

Here are some examples of where I've found this vulnerability:

Project             CVE Link
Checkstyle          https://nvd.nist.gov/vuln/detail/CVE-2019-9658
PMD                 https://nvd.nist.gov/vuln/detail/CVE-2019-7722
Diffplug Spotless   https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9843

The ask is to ensure that, if schemas need to be served at all, any standard only ever serves them over HTTPS connections; and ideally, a reference parser for these standards should be audited to ensure that it itself isn't vulnerable to XXE.

Alternatively, JSON doesn't seem to have a similar vulnerability, so if no XML standard is created, this may be a non-issue.

mfriedenhagen commented 5 years ago

JSON does not have this attack vector, as there is no parsing standard at all ☺️. But I agree that using https is nowadays a must.

mickaelistria commented 5 years ago

Any chance such an effort of defining a standard for test reports extends to

  1. non-JVM languages, and
  2. a "protocol" to stream progress?

IMO, such a standard needs to fulfill the two requirements above to be sustainably profitable.

The Debug Adapter Protocol, for instance, has managed to capture the main abstractions of all languages regarding threads, lines, and so on. I imagine it is possible to capture a test execution flow and result in the same way. As for the protocol to stream progress, it could simply send the work-in-progress report (including a "not run yet" state for tests) to registered listeners as tests are running.

Tibor17 commented 5 years ago

Mickael, the reports are only meant to be a statistics result. They are not meant for inventing sci-fi. I understand this activity as a reason to solve JUnit 5 problems, where the display name could not be placed in the current ANT-based standard (see the Surefire doc); however, other questions regarding the names of JUnit 5 containers can be handled by "group" in the current standard.

We should, first of all, answer these questions:

Second, answer which report type you need to have:

The first report type can be accomplished with XML/JSON, but the second approach would require using YAML. I think there is no question which one is better and which one would be used, because the end tools have their own preferences, and both are needed in different use cases. The only issue is to agree on a format that would be practical.

mindplay-dk commented 5 years ago

fwiw, I'm not generally fond of XML, but just want to point out why it may be the most appropriate thing for something like this: it streams, by design, with a SAX-style parser being available on practically any platform.

JSON is somewhat of a different story - yes, streaming JSON parsers exist, but they are generally designed for ingestion only (e.g. internally in a JSON decoder) and rarely, if ever, seen or used in userland.

A valid JSON document represents a single value, e.g. a single array, object, string, number, boolean or null value, and the APIs are therefore typically synchronous, with one input / one output.

I've seen a few protocols that use JSON for e.g. message passing or queues by merely using a simple superset of the JSON format, e.g. a series of object values, which enables streaming of the individual objects. While this approach might work, a userland streaming parser would be required.

Can YAML stream? My guess is that a streaming YAML parser is as unusual as a streaming JSON parser. Likewise, it is possible with a superset, of course - TAP comes to mind. Again, this requires a custom parser.

XML on the other hand streams naturally, with parsers available anywhere, so... Just my two cents, but XML seems like the only natural choice for a reporting format that needs to stream.

(As an aside: I've been experimenting with an in-process means of streaming results via an interface-based messaging protocol based on JUnit here - I don't have much experience using this in practice yet, so I can't really say if I'm happy with the idea or not. A listener that produces XML would be needed in any case, to enable streaming between processes and, of course, to files, so...)

mkobit commented 5 years ago

To throw one more consideration into the ring: consider using Protocol Buffers as the IDL format. I know this could introduce a dependency, but it may be worth looking at.

For example, Bazel seems to have definitions in test_status.proto, where test cases are represented as a tree:

message TestCase {
  enum Type {
    TEST_CASE = 0;
    TEST_SUITE = 1;
    TEST_DECORATOR = 2;
    UNKNOWN = 3;
  }

  enum Status {
    PASSED = 0;
    FAILED = 1;
    ERROR = 2;
  }

  repeated TestCase child = 1;
  optional string name = 2;
  optional string class_name = 3;
  optional int64 run_duration_millis = 4;
  optional string result = 5;
  optional Type type = 6;
  optional Status status = 7;
  optional bool run = 8 [default = true];
};

mindplay-dk commented 5 years ago

To throw one more consideration into the ring, consider using Protocol Buffers as the IDL format.

@mkobit isn't this a serialization format as well? I think this falls in the same category as JSON - you could use it to serialize individual messages/packets, but you'd still need to define how the individual packets are going to stream.

One advantage of XML is that a schema can define constraints of the stream - e.g. not just what the individual messages/packets look like, but also the order in which they're allowed/required to occur, e.g. by (deeply) nesting items into sections. If you build a stream layer based on (or on top of) JSON or protocol buffers (or another data serialization format), you have to move beyond a formal (run-time executable) specification to an informal (written English) specification.

kcooney commented 5 years ago

A few points

While XML can be read with a streaming parser, it cannot be used to stream data from one process to another (which is what we would want if we wanted to drive an IDE UI from the results).

Bazel has a protocol buffer format for representing the output of a test action, but the test processes that are spawned by Bazel write their results to a file in XML format (I believe using the JUnit-Ant format).

Protocol buffers have a serialization format. Each message starts with the length in bytes of the message, so it could in theory be used to stream data via a file. I think we could use protocol buffers for streaming, but Rasmus is correct that we would need a human-readable specification, because you need to know the type of a message in order to parse it. Agreed that using protocol buffers would increase the binary size of the writer and the reader.

I'm starting to think that what we want to drive an IDE UI would necessarily be very different from what we would want from a reporting format. The former is essentially a socket protocol, while the latter is a static representation of what occurred. And although you could produce a report from the data streamed from a socket, if every client that wanted a report needed to do that, it would put a lot of burden on clients. Based on that, I think we should have a separate issue tracking a socket protocol, and narrow our discussion here to just a reporting format.

If we only need a reporting format, I think XML is the obvious choice.

sormuras commented 5 years ago

Good points, Kevin.

I'd like to propose another approach: let's use "the file system" as the main container and handler for "the reporting format": re-use basic, platform- and language-agnostic, existing tools and optimized solutions that support hierarchical structures and multi-threading by default, without re-inventing the wheel.

"Use the file system, Luke"

A (JUnit Platform) test run can be rendered as a tree on the console:

.
+-- JUnit Jupiter
| +-- SharedResourcesDemo
| | +-- canSetCustomPropertyToBanana()
| | +-- customPropertyIsNotSetByDefault()
| | '-- canSetCustomPropertyToApple()
| +-- DynamicTestsDemo
| | +-- dynamicTestsWithContainers()
...
|
'-- JUnit Vintage
  '-- JUnit4Tests
    '-- standardJUnit4Test

Basic outline

While a test is being executed

After a test has finished execution

The resulting directory tree could look like this. Simple timestamps are shown as Tn, with T0 being the first instant and T100 being the last one:

Z:\TEST-REPORT-8522361051863520261
β”‚   test.plan.execution.begin.txt [T0]
β”‚   test.plan.execution.end.txt [T100]
β”‚
β”œβ”€β”€β”€[engine~junit-jupiter]
β”‚   β”‚   test.execution.begin.txt [T1]
β”‚   β”‚   test.execution.end.txt [T9]
β”‚   β”‚   test.status.SUCCESSFUL.txt [T8]
β”‚   β”‚
β”‚   β”œβ”€β”€β”€[class~example.AssertionsDemo]
β”‚   β”‚   β”‚   test.execution.begin.txt [T4]
β”‚   β”‚   β”‚   test.execution.end.txt [T7]
β”‚   β”‚   β”‚   test.status.FAILED.txt [T6]
β”‚   β”‚   β”‚   screenshot-4711.png [T5]
...
β”‚
└───[engine~junit-vintage]
    β”‚   test.execution.begin.txt [T1]
    β”‚   test.execution.end.txt [T50]
    β”‚   test.status.SUCCESSFUL.txt [T49]
    β”‚
    └───[runner~example.JUnit4Tests]
        β”‚   test.execution.begin.txt [T11]
        β”‚   test.execution.end.txt [T21]
        β”‚   test.status.SUCCESSFUL.txt [T19]
        β”‚
        └───[test~standardJUnit4Test(example.JUnit4Tests)]
                test.execution.begin.txt [T12]
                test.execution.end.txt [T14]
                test.status.SUCCESSFUL.txt [T13]

Thoughts

mindplay-dk commented 5 years ago

let's use "the file system" as the main container and handler for "the reporting format"

In my view, this will add a lot of complexity - for example: traversal of folders and files, re-establishing the order in which the units were generated, and so on.

Opening a single XML file seems much simpler - traversal order is inherent in the tree structure, we're not reinventing the wheel as XML (and schema/validation etc.) is available everywhere, and piping between processes on the command-line (as is common with e.g. TAP) enables (not multi-threading in itself but) parallel execution of chains of tools.

(It's not clear to me why reusing shell primitives to execute multiple tools in a streaming/parallel fashion is so common with TAP and not really common with XML-based tools? Perhaps merely because it's a stated objective for TAP. If the spec makes it a stated objective to build small, independent tools that can be chained via streams/pipes, and the XML schema is designed with this is mind, you might get the same benefits here.)

kcooney commented 5 years ago

@sormuras many teams have a suite of very fast tests. Writing a file to the filesystem for each test method would slow down runs of suites like this, even with an in-memory file system.

I do think it's a good idea to think about how the report can specify the location of files produced by the test (log files, screenshots, videos, etc). I'm not sure if JUnit 5 provides a good way for the test code or test infrastructure to tell the platform about the type and location of output files created during the test. On a previous JUnit4-based project I did this with thread locals and callbacks and it was ugly.

Tibor17 commented 5 years ago

I still have a feeling that people are talking about technologies rather than an architecture. We should realize that there should be two reports:

After this you would realize that a list of events (step 1) is simple to do (without committing to technologies), and that the end tool would then resolve these events into the final report. But this final report can be anything, any technological representation of the final report; it depends on the parser.

marcphilipp commented 2 years ago

A new repo has been created that contains drafts of two new formats and a proof-of-concept implementation: https://github.com/ota4j-team/open-test-reporting. JUnit 5.9 M1 will begin writing the event-based format, assuming https://github.com/junit-team/junit5/pull/2868 is merged. Please see the readme of the separate repo for details, and feel free to start a discussion or open an issue.

mindplay-dk commented 2 years ago

3 years on, and fwiw, I would more likely shoot for a JSON-stream format - XML is clunky and the world uses JSON now.