To keep the simulations up to date we need a mechanism to record/collect the API calls behind our user-flow simulations, so that we can detect changes and apply them in the simulation classes.
So far I can see several options:
- Gatling recorder
- a node.js script with puppeteer navigating to pages and collecting API calls (see the sketch after this list)
- storing the baseline as JSON and comparing it with the run output (a comparison sketch follows further below)
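For the puppeteer option, a minimal sketch of what the capture script could look like (the Kibana base URL, the `/api/` path filter, and the Discover page are assumptions for illustration):

```ts
// capture-requests.ts — hypothetical capture script; names and paths are assumptions
import puppeteer from 'puppeteer';
import { writeFileSync } from 'fs';

interface CapturedRequest {
  method: string;
  path: string;
}

async function captureScenario(baseUrl: string, outFile: string) {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  const captured: CapturedRequest[] = [];

  // Record every request the page issues against the Kibana server.
  page.on('request', (request) => {
    const url = new URL(request.url());
    if (url.origin === baseUrl && url.pathname.startsWith('/api/')) {
      captured.push({ method: request.method(), path: url.pathname });
    }
  });

  // Walk through the user flow; Discover is just an example scenario.
  await page.goto(`${baseUrl}/app/discover`, { waitUntil: 'networkidle0' });

  await browser.close();
  writeFileSync(outFile, JSON.stringify(captured, null, 2));
}

captureScenario('http://localhost:5601', 'captured-requests.json').catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Listening on the `request` event is enough to observe the calls; request interception doesn't need to be enabled for this.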
I can see two steps that could be implemented:
1. Create a separate job that goes through a scenario in the browser and captures the requests made to the Kibana server. Apply filtering and compare against a saved set of requests. If the comparison fails, fail the job; that is the indicator that the load testing scenario needs to be updated. Once the load testing scenario(s) are updated, update the saved set of requests. There are a lot of manual steps here 👎
2. Have the whole thing be completely automated, so that the functional_test_runner, or something like it, goes through the UI once to generate the scripts, which are then used by Gatling with multiple users.
Ideally we don't do step 1 but just go straight to implementing step 2 if possible.
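Whichever step we end up implementing, the baseline comparison itself could be a small check along these lines (file names are assumptions; comparing method + path as a set keeps ordering and repeated polling calls from producing spurious diffs):

```ts
// compare-baseline.ts — hypothetical comparison step for the CI job; file names are assumptions
import { readFileSync } from 'fs';

interface CapturedRequest {
  method: string;
  path: string;
}

// Normalize to "METHOD path" strings and de-duplicate.
function toSet(requests: CapturedRequest[]): Set<string> {
  return new Set(requests.map((r) => `${r.method} ${r.path}`));
}

const baseline = toSet(JSON.parse(readFileSync('baseline-requests.json', 'utf8')));
const current = toSet(JSON.parse(readFileSync('captured-requests.json', 'utf8')));

const added = [...current].filter((r) => !baseline.has(r));
const removed = [...baseline].filter((r) => !current.has(r));

if (added.length || removed.length) {
  console.error('API calls changed; the load testing scenario needs to be updated.', { added, removed });
  process.exit(1); // fail the job so the simulation classes get updated
}
console.log('Captured requests match the baseline.');
```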