Closed: ayrat555 closed this 4 years ago
Feedback:
The dashboards are great, but:
- we lack information about what "Average test count" means. Average of what? What is being counted? This needs to be reflected and evident in the dashboard itself (title, description... whatever is available).
- there's one instance of the dashboard per test type (transaction tests, deposit tests), correct? Does that mean we can only run one instance of a test at a time? What if we want to run one of these tests against development and another against sandbox? The same goes for monitors: there's only one instance of each monitor, and the monitor is not triggered on specific tags (e.g. where the monitor was triggered, or which environment the test was run against when the monitor was raised).
@boolafish can you please review this PR
@boolafish
One side note: I am not sure how much we need to chain the tests now (I know it was previously brought up for really long-running tests, e.g. ones running for days). One note/idea after looking at the PR: we could actually separate the traffic trigger and the result checking into two Spinnaker stages. This would make it flexible to have multiple earlier stages that trigger the traffic and one single last stage that checks the metrics.
I added the ability to run tests and make assertions separately.
From the docs:
Running tests without assertions.
You can run tests without assertions by passing `false` as the last parameter:
STATIX_TAG="env:perf_circleci" mix run -e "LoadTest.TestRunner.run()" -- "transactions" 1 80 false
To just check whether there are any events in a given period of time, run the command passing a start time and an end time (Unix timestamps):
STATIX_TAG="env:perf_circleci" mix run -e "LoadTest.TestRunner.run()" -- "make_assertions" 1605775276 1605785276
This PR adds the following:
I created two monitors, one for each of the tests:
Notes:
Statix metrics are tagged with the environment (set via `STATIX_TAG`, e.g. `env:perf_circleci`), so they can be distinguished by env.
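For illustration, a minimal sketch of what per-env tagging with Statix can look like; the module, metric name, and function below are assumptions for this example, not the repo's actual code:

```elixir
defmodule LoadTest.Metrics do
  use Statix

  # Tag each metric with the value of STATIX_TAG (e.g. "env:perf_circleci"),
  # so dashboards and monitors can be filtered per environment.
  def record_test_run do
    increment("load_test.runs", 1, tags: [System.get_env("STATIX_TAG", "env:unknown")])
  end
end

# Usage (after configuring Statix in config.exs):
#   LoadTest.Metrics.connect()
#   LoadTest.Metrics.record_test_run()
```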