chrisgodsey opened 5 years ago
Requests that happen in the `setup()` or `teardown()` functions are tagged appropriately with the `::setup` and `::teardown` (IIRC) values for the `group` metric tag. So while you can't filter them out in the simple k6 end-of-test summary, you can do so if you output the metrics to a JSON file, InfluxDB, or Load Impact Insights: https://docs.k6.io/docs/results-output
Adding a per-request option that specifies that the HTTP request shouldn't be tracked could be easy; we could just add another parameter here. But this won't scale, since we'd have to add such a parameter to every measurable thing k6 does, like WebSockets, and much more in the future (gRPC, DB connections, etc.). Instead, this might be more universally implemented at the `group()` level, probably as a part of this issue or after it: https://github.com/loadimpact/k6/issues/884
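Purely as an illustration of that `group()`-level idea, here is one hypothetical shape it could take. Note that `group()` today only accepts a name and a function; the `tracked` option below does not exist and is just a sketch of the proposal:

```javascript
import { group } from 'k6';
import http from 'k6/http';

export function setup() {
  // Hypothetical: a group whose metrics would be excluded from the
  // summary and thresholds, regardless of protocol (HTTP, WebSockets, ...).
  group('auth precondition', { tracked: false }, function () {
    http.post('https://example.com/auth', { user: 'u', pass: 'p' });
  });
}
```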
I just discovered this behavior and, finding it counterintuitive, decided to add a brief paragraph to the documentation to make it clear to every user. You can find it here: https://docs.k6.io/docs/test-life-cycle
I'm not going to close this issue yet, to allow people to find it more easily, despite https://github.com/loadimpact/k6/issues/1321 being the issue tracking the potential fix. But for now, it's worth pointing out that there is a partial workaround for a lot of the issues people experience with `setup()` and `teardown()`, described in the forum threads linked to in https://github.com/loadimpact/k6/issues/1321#issuecomment-682027058 and https://github.com/loadimpact/k6-docs/issues/205
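One concrete form of that workaround, sketched using the tag-filtered sub-metric thresholds that k6 does support (the `kind` tag name and the threshold values here are arbitrary choices for illustration): tag the requests you actually want to judge, and define your thresholds on that sub-metric, so `setup()`/`teardown()` traffic never counts against them:

```javascript
import http from 'k6/http';

export const options = {
  thresholds: {
    // Applies only to points carrying kind=scenario, i.e. the requests
    // made from the VU code below -- setup() requests don't count here.
    'http_req_duration{kind:scenario}': ['p(95)<500'],
  },
};

export function setup() {
  // Untagged: still shows up in the overall http_req_duration summary,
  // but is excluded from the threshold above.
  http.post('https://example.com/auth', { user: 'u', pass: 'p' });
}

export default function () {
  http.get('https://example.com/api/resource', { tags: { kind: 'scenario' } });
}
```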
So, in using k6 to do some performance testing of our API, in some cases we need to set up preconditions in order to ensure our tests are successful (the easiest example is authentication). I'm able to easily add the authentication request in `setup()` and then later add the resulting token into the header of all of my requests.
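For context, that pattern looks roughly like this (a sketch; the URLs, credential fields, and the `access_token` key are placeholders for our API):

```javascript
import http from 'k6/http';

export function setup() {
  // Precondition: authenticate once, before any VU code runs.
  const res = http.post('https://example.com/auth', {
    username: 'user',
    password: 'pass',
  });
  return { token: res.json('access_token') };
}

export default function (data) {
  // Every scenario request reuses the token obtained in setup().
  http.get('https://example.com/api/resource', {
    headers: { Authorization: `Bearer ${data.token}` },
  });
}
```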
My issue is that this request (and any other setup/precondition/postcondition requests I want to make) pollutes the results of my test. If for some reason one of these requests is particularly expensive or inexpensive, it can make a meaningful difference in my tests and their results.
I'm looking for a way to purposefully direct k6 to not track the results of a certain request when it comes to aggregating the results of my test and judging whether or not it passes my test thresholds.
An immediate idea that comes to mind is an `untrackedGet` method that mirrors your `get` function without calling whatever adds its results to the later-evaluated results, so that I could do `http.untrackedGet()` (and similar versions for your other verbs).
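To make the shape concrete (entirely hypothetical; `untrackedGet` does not exist in k6):

```javascript
import http from 'k6/http';

export function setup() {
  // Hypothetical: behaves exactly like http.get(), except its timings
  // would not feed http_req_duration or any other built-in metric.
  const res = http.untrackedGet('https://example.com/auth');
  return { token: res.json('access_token') };
}
```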
The only other alternative I have is to try to do this in a script external to k6 and then have that script also call k6, passing the values into the Docker container, but that seems unnecessarily burdensome.