svanoort opened this issue 8 years ago
Here are some suggestions. Some of these may overlap.
Thank you
@nitrocode Thanks for your feedback!
I'd like it if this could support the following as a testing tool (not for benchmarks):
Suppose I have 4 filters (as GET params) on an API endpoint. I want all of them, and their combinations, to be tested with both their allowed values and values which are not allowed.
So if I had filter params:
max_age (integer), name (string, matched by regex), country (string, matched by regex), employed (boolean: true/false)
I want a way to generate all or some combinations of these from a provided list (return value checks are not necessary).
Maybe something like:
```yaml
- test:
    - combinator: {name: 'filter', type: 'form', params: {max_age: [10, 20, 30, 40, 50], name: ['Doe', 'Jane']}}
    - url: '/api/v1/person.json?$filter'
# `type` can be json/xml/form, so it could also be used to generate test data for the request body.
# This would in turn create test values for `$filter` like:
#   max_age=10&name=Doe
#   max_age=20&name=Doe
#   ... and so on ...
```
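For illustration, here is a minimal Python 3 sketch of generating those combinations with itertools.product (nothing here is pyresttest API; the helper name is made up):

```python
# Hypothetical sketch, not pyresttest API: produce a query string for every
# combination of filter values, like the proposed `combinator` block above.
from itertools import product
from urllib.parse import urlencode  # assumes Python 3

def filter_combinations(params):
    """Yield one query string per combination of the given parameter values."""
    keys = sorted(params)
    for values in product(*(params[k] for k in keys)):
        yield urlencode(dict(zip(keys, values)))

params = {'max_age': [10, 20, 30, 40, 50], 'name': ['Doe', 'Jane']}
for qs in filter_combinations(params):
    print('/api/v1/person.json?' + qs)
    # -> /api/v1/person.json?max_age=10&name=Doe
    #    /api/v1/person.json?max_age=10&name=Jane ... and so on
```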
This may not always be necessary, because the YAML example above looks like something that the testing suite's DB layer itself should handle. But there are other areas where this can be quite helpful: params that have little to do with the database, encode business logic, and may be affected by the values of the others. I looked hard in the documentation but couldn't locate anything that can do this.
Also thank you for the work on this great tool!
@n9986 Thank you for your feedback! I'll look into how something like this could be incorporated.
Thought: perhaps the best way to approach this is as a piece-at-a-time utility from the Python side.
This is easy enough to build out: rather than trying to replace unittest + requests in pure-Python use, offer up the extensible components as ways to decorate and simplify test cases.
For example: validating response bodies/headers, extracting data from HTTP responses, doing templating/generation on request bodies, etc.
This is complementary to the effort to provide unittest outputs and better Python APIs for working with tests/testsets/etc.
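A rough illustration of that idea, using only requests + unittest with a toy extractor helper (illustration only, not pyresttest's actual API):

```python
# Illustration only -- not pyresttest's actual API. Shows the idea of
# extractor/validator pieces used as plain helpers around requests + unittest.
import unittest
import requests

def extract_path(response, path):
    """Tiny stand-in for a pyresttest-style extractor: walk dotted JSON keys."""
    data = response.json()
    for key in path.split('.'):
        data = data[key]
    return data

class PersonApiTest(unittest.TestCase):
    def test_person_name(self):
        resp = requests.get('http://localhost:8000/api/v1/person/1.json')
        self.assertEqual(resp.status_code, 200)
        # The extractor slots into an ordinary unittest assertion
        self.assertEqual(extract_path(resp, 'person.name'), 'Doe')

if __name__ == '__main__':
    unittest.main()
```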
Sorry for taking too long to answer; here it goes. I come from another stack, and only pyresttest runs on Python in my project. Anyway, there are some features that I would like to see in pyresttest:
- As you said, xUnit compatible output; currently I end up writing my own pyresttest manager in bash which does this.
- Better differentiation between error messages and context; the debug flag is too simplistic, so when I get a failure the dump is huge and hard to debug.
- A way to run it in parallel batches; I have many tests and they are taking too long for developers waiting to merge their pull requests. I'll probably add batch runs to the manager I wrote.
- A way to set all the things I mentioned above through a configuration file.
@jeanCarloMachado Thank you for your feedback, and feedback at any time is welcome. To give some idea on timelines, the current full roadmap runs about a year out, and I aim for a major release with a couple big changes + several smaller feature additions & bugfixes every 3-6 months.
Between this and the other feedback, I've bumped the priority of xUnit compatible output up several notches, and it is planned for the next release. The dynamic/flexible variable binding got bumped down in priority to allow for this.
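One possible shape for that unittest wrapping, purely as a sketch (the executor call and its result object are stand-ins, not real pyresttest functions):

```python
# Sketch of the wrap-unittest idea: build TestCase methods dynamically from
# test definitions so any xUnit-style runner can report on them.
# `run_single_test` and its result object are hypothetical stand-ins.
import unittest

def make_test_method(test_def):
    def method(self):
        result = run_single_test(test_def)  # hypothetical executor call
        self.assertTrue(result.passed, result.failure_message)
    return method

def build_suite(test_defs):
    attrs = {'test_%03d' % i: make_test_method(t) for i, t in enumerate(test_defs)}
    case = type('GeneratedRestTests', (unittest.TestCase,), attrs)
    return unittest.defaultTestLoader.loadTestsFromTestCase(case)

# With the third-party unittest-xml-reporting package, JUnit-style XML output
# then comes almost for free:
#   import xmlrunner
#   xmlrunner.XMLTestRunner(output='reports').run(build_suite(test_defs))
```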
Logging is getting some extra love as well with https://github.com/svanoort/pyresttest/pull/171 and I'll factor in ways to improve the signal-to-noise ratio here.
Config files: I've added https://github.com/svanoort/pyresttest/issues/177 to include this. It depends on several planned features in future releases, so it will be a while out, but it's factored into design planning now.
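No concrete design exists yet, but as a strawman, a config file could simply carry defaults for existing command-line options. A sketch, where every option name is an assumption rather than a committed format:

```python
# Strawman config loader -- the option names here are assumptions, not a
# committed design; a real version would merge CLI flags over these values.
import yaml  # pyresttest already depends on PyYAML

DEFAULTS = {'print_bodies': False, 'interactive': False, 'log': 'warning'}

def load_config(path):
    """Merge a YAML config file over built-in defaults."""
    with open(path) as f:
        user = yaml.safe_load(f) or {}
    merged = dict(DEFAULTS)
    merged.update(user)
    return merged

# e.g. a ~/.pyresttest.yaml containing:
#   log: debug
#   print_bodies: true
```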
Parallel: yes, I agree this would improve speed a lot. It is a rather complex, multi-step process to get there, but here are the rough design plans. I'm open to ideas or examples if people have good samples that may help:
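As one rough illustration of the batching direction (not the actual design; `run_batch` is a hypothetical stand-in):

```python
# Rough illustration: run independent test batches in worker threads.
# `run_batch` is hypothetical; because pyresttest tests can share state
# through variable binding, only truly independent batches are safe to split.
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_batches(batches, max_workers=4):
    failures = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(run_batch, batch) for batch in batches]
        for future in as_completed(futures):
            failures.extend(future.result())  # each batch returns its failures
    return failures
```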
Happy wall-of-text! :-)
It's been over a year since I saw any activity. @svanoort Are you still actively maintaining this?
@jeanCarloMachado Have you done anything on this for your project?
@rgarg1 The last commit to master was May 2016, and the last commit to refactor-execution was April 2017. I don't think this is maintained anymore.
Currently, a lot of users are torn between PyRestTest (YAML) and the requests/unittest combination (pure Python) for their REST API testing needs.
Story: As a PyRestTest user, I would like features that make it easier to use PyRestTest within a Python environment. I would like to offer a Python builder API, similar to frisby.js, for constructing and running tests: something in a fluent style that generates test/testset/testconfig objects just like the existing YAML does.
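For a sense of the style, a purely hypothetical sketch of what such a builder could look like (none of this API exists today):

```python
# Purely hypothetical sketch of the fluent-builder idea; this API does not
# exist in pyresttest, it only illustrates the frisby.js-like style proposed.
class RestTestBuilder(object):
    def __init__(self):
        self._test = {}

    def get(self, url):
        self._test.update(url=url, method='GET')
        return self

    def expect_status(self, code):
        self._test['expected_status'] = [code]
        return self

    def build(self):
        # A real version would return a pyresttest Test object here
        return dict(self._test)

test = (RestTestBuilder()
        .get('/api/v1/person/1.json')
        .expect_status(200)
        .build())
```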
There are a couple outstanding issues that would build in this direction:
https://github.com/svanoort/pyresttest/issues/29 - better logging when invoking directly from Python
https://github.com/svanoort/pyresttest/issues/43 - provide xUnit compatible output, preferably by wrapping unittest
https://github.com/svanoort/pyresttest/issues/146 - setup/teardown (as part of v2.0)
Current PyRestTest strengths:
Current PyRestTest weaknesses:
Thoughts on what is particularly helpful or unhelpful here for your uses? @nitrocode @lerrua @jewzaam (I'm aware you guys were using a legacy fork based off a pre-0.x version) @MorrisJobke @jeanCarloMachado
Edit: Scope limits
Unix philosophy: one tool for one task, not One Tool To Rule Them All™. The goal is to keep PyRestTest focused on what it does well, while letting it grow.