nirsegev opened this issue 9 years ago
Hi, PyRestTest offers a couple of features to assist with CI integration. One, upon completion, the process returns a status code equal to the number of test failures. Two, you can set the logging level on the command line (e.g. --log=ERROR) so that only error-level log lines are shown.
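For illustration, here is a minimal sketch of a CI gate built on that exit-code behavior; the URL, test file name, and invocation details below are placeholders, not a documented recipe:

```python
import subprocess

# Sketch of a CI gate: the exit status equals the number of test failures,
# so any nonzero code should fail the build. URL and file names are made up.
result = subprocess.run(
    ["pyresttest", "https://api.example.com", "tests.yaml", "--log=ERROR"],
    capture_output=True, text=True,
)
if result.returncode != 0:
    raise SystemExit("%d test failure(s):\n%s" % (result.returncode, result.stderr))
```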
Unfortunately, there is no support for custom output formats yet; perhaps you could assist with defining what you'd expect the output to look like? I've considered adding XML/JSON/YAML output but get hung up on defining the output structure. (It will take a considerable amount of effort to implement, however; the Python logging facility is not designed for this, so it will require rewriting all the logging components.)
Cheers, Sam
Hi all,
@svanoort I guess it will probably be difficult for you to come up with an output structure that fits the needs of all the various clients. Maybe pyresttest should expose an "interface" for clients to implement, so each can have its own custom format. Of course, one may argue that it's not wise to push this problem onto the client, though; some sort of plugin design may do the trick.
//Marius
Thanks for the response. I will follow up if I come up with a solution. Best, Nir
@mariusmagureanu Yes, that is a good idea and along the lines I was thinking -- one notion is to use a listener API that is invoked at each step of the test execution/benchmark pipeline. This is somewhat linked to a longer-term goal of refactoring test/benchmark components into a set of composable, pluggable steps.
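To make that notion concrete, here is a rough sketch of what such a listener API might look like; every name here is hypothetical, since nothing like this exists in PyRestTest yet:

```python
# Hypothetical listener interface -- none of these names exist in PyRestTest
# today; this only illustrates the per-step hook idea described above.
class ReportListener:
    def on_testset_start(self, testset_name):
        pass

    def on_test_complete(self, test_name, passed, elapsed_sec, message=None):
        pass

    def on_testset_end(self, testset_name):
        pass


# A JUnit-XML reporter would then be just one implementation; console logging
# would be another. The runner would invoke every registered listener at each
# step of the test/benchmark pipeline.
class CollectingListener(ReportListener):
    def __init__(self):
        self.results = []

    def on_test_complete(self, test_name, passed, elapsed_sec, message=None):
        self.results.append((test_name, passed, elapsed_sec, message))
```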
I'm also toying with the idea of offering some sort of Jenkins plugin for integration in the future -- due to job changes, I expect to gain a great deal of experience with this soon.
Logging output in an x-unit format would help with Jenkins integration without having to write a plugin; it is something of an informal standard for test reporting. My current employer (and several before it) leveraged this format, even for non-Java projects: sometimes via test-runner output-format options, sometimes by just transforming the default output into something x-unit-like, essentially anything sufficient for Jenkins' reporting plugins to pick up on.
Here is a JUnit Schema for Apache Ant's JUnit, in case you haven't seen it before:
https://github.com/windyroad/JUnit-Schema
I feel like this format, in particular, will help drive broader adoption of this admittedly cool tool.
This issue over at TravisCI (https://github.com/travis-ci/travis-ci/issues/239) lists several CI systems that expect this sort of format, which I'd offer as evidence for my argument. However, TravisCI doesn't currently support it, from what I can tell after a few minutes of searching. In fact, there is an open bounty for parsing JUnit-style build artifacts into a simple report: https://www.bountysource.com/issues/848108-handle-ant-s-junit-xml-formatted-output-from-test-spec-runs.
None of this is to say a Jenkins plugin wouldn't be completely awesome, but don't forget about all the other CI systems that would get free reporting integration if you were to just add an xunit-style output file (as ci_reporter, NodeUnit, Jasmine, Clojure.test, PyUnit, SimpleTest, JUnit, NUnit have done). Then you have Hudson/Jenkins, Bamboo, and apparently at some point in the foreseeable future, TravisCI as painless integrators of this framework.
At any rate, to close a long post: we will likely build something internally, either hacking at a fork or just transforming the output data into a minimally acceptable JUnit format for our Jenkins integration. We have a large and varied body of testing assets spanning languages and testing types, all of which either already conform to xunit-style outputs or have tooling around them to produce such output from the default formats.
@parametrization With this in mind, I think this is a pretty reasonable output format, and maps well to what is currently output, to be honest.
I like the suggestion from @mariusmagureanu to do this by providing hooks for different output formats. I think this will be dependent to some extent on changes planned for v2: https://github.com/svanoort/pyresttest/issues/45
Specifically, the more flexible config structure and the stepwise decomposition of test/benchmark execution components.
I'm going to call out up front that this will be a leaky abstraction. PyRestTest supports features that *Unit doesn't really, and vice versa, so we're aiming to provide coverage for the most common cases.
Here's the proposed mapping (a rough code sketch follows the list):
TestSets <--> TestSuites in JUnit
TestSet <--> TestSuite in JUnit
All macro-level elements (Test/Benchmark) <--> Test element in JUnit
Configuration/bindings/etc: --> No mapping to output? Just map context vars? Undefined so far.
Undefined: system out, etc?
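As a sketch of that mapping, assuming hypothetical result objects with name/passed/elapsed/failure_message fields (PyRestTest does not expose such structures today):

```python
import xml.etree.ElementTree as ET

# Rough sketch only: testset/result objects and their fields are assumed,
# not part of any existing PyRestTest API.
def testsets_to_junit(testsets):
    suites = ET.Element("testsuites")                 # TestSets <--> TestSuites
    for testset in testsets:
        suite = ET.SubElement(suites, "testsuite",    # TestSet <--> TestSuite
                              name=testset.name)
        for result in testset.results:                # Test/Benchmark <--> testcase
            case = ET.SubElement(suite, "testcase", name=result.name,
                                 time=str(result.elapsed))
            if not result.passed:
                failure = ET.SubElement(case, "failure")
                failure.text = result.failure_message or ""
    return ET.tostring(suites, encoding="unicode")
```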
For each step, they're going to call a method in resttest.py to perform logging/reporting (potentially using configuration variables to define what gets logged).
I don't have a formal ETA for this because of work commitments, but it's getting bumped up quite a bit in priority (for Jenkins integration use), and I think it is higher priority than implicit templating.
@ksreddy543 As you asked: there is no expected ETA behind this, since I'm tying it to a larger architectural change/extension to avoid having to rework both completely, specifically https://github.com/svanoort/pyresttest/issues/45 .
Since I am heavily committed with my employment at CloudBees (limiting free time), and this is an uncompensated, outside-of-working-hours side project, my guess is that this is probably some months out.
Adding a note for later use: there's a library to assist with this -- https://pypi.python.org/pypi/junit-xml (on GitHub: https://github.com/kyrus/python-junit-xml).
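For reference, a small example of that library's usage (the test names and timings here are made up):

```python
from junit_xml import TestCase, TestSuite

# Build a couple of fake results with the junit-xml library linked above.
case_ok = TestCase("get_user", classname="smoke_tests", elapsed_sec=0.12)
case_bad = TestCase("create_user", classname="smoke_tests", elapsed_sec=0.34)
case_bad.add_failure_info(message="Expected HTTP 201, got 500")

suite = TestSuite("pyresttest-smoke", [case_ok, case_bad])
print(TestSuite.to_xml_string([suite]))  # JUnit-style XML on stdout
```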
This is officially a v2 feature, tied to the step/macro system: https://github.com/svanoort/pyresttest/issues/92
Alternate option that removes blocking dependence on the major refactors:
Provide unittest-extending wrappers for the testset, test, and benchmark elements, plus testconfig. Tie into existing JUnit-compatible unittest wrappers, such as https://pypi.python.org/pypi/unittest-xml-reporting (a sketch follows below).
This could be used two ways:
resttest can register as a unittest runner to do its work (with Benchmark/Test objects being special cases of TestCase that it handles specially).
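Here is a rough sketch of that wrapper idea using unittest-xml-reporting (the xmlrunner package); the run_fn stand-in for executing a single PyRestTest test is hypothetical, since no such helper exists yet:

```python
import unittest
import xmlrunner  # provided by the unittest-xml-reporting package

def make_case(test_name, run_fn):
    """Wrap a PyRestTest-style test in a unittest.TestCase subclass.

    run_fn is a hypothetical stand-in for whatever call would execute a
    single PyRestTest test and return True/False.
    """
    def test_run(self):
        self.assertTrue(run_fn(), "PyRestTest test failed: %s" % test_name)
    return type(test_name, (unittest.TestCase,), {"test_run": test_run})

if __name__ == "__main__":
    suite = unittest.TestSuite()
    suite.addTest(make_case("get_user", lambda: True)("test_run"))
    # XMLTestRunner writes JUnit-style XML reports into the output directory.
    xmlrunner.XMLTestRunner(output="test-reports").run(suite)
```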
Optimistically moving a rough (alpha-quality) implementation of this to the 1.8.0 milestone because it's in high demand... if it can be readily done using convenient wrappers. Otherwise it'll get pushed back again until the rearchitecting supports it better.
This can probably be done successfully using a callback-based approach, moving execution logic into the test/benchmark libraries. This is also a big step toward PyRestTest v2.
Two callback options:
Hi, I would like to use PyRestTest in our CI. Is there currently support for XML test-results generation? Thanks