To validate (at least partially) the recommendations we currently generate, we should actually deploy an application to the recommended cloud infrastructure and compare the measured results with the generated predictions.
I suggest we deploy the Cloud Stove Rails app (without the frontend, to keep it simple), seed the database with some data (maybe the test fixtures), and load test it. The load test requests can probably be derived from the current integration tests.
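As a sketch of the seeding step (assuming the standard Rails fixture layout under test/fixtures; the script name and fixture set names below are illustrative, not actual Cloud Stove identifiers), the deployed instance could reuse the test fixtures like this:

```ruby
# seed_from_fixtures.rb -- hypothetical one-off script.
# Run on the deployed instance with:
#   RAILS_ENV=production bin/rails runner seed_from_fixtures.rb
#
# Loads the same YAML fixtures the test suite uses into the live database.
# "applications" and "ingredients" are placeholder fixture set names.
ActiveRecord::FixtureSet.create_fixtures(
  Rails.root.join('test', 'fixtures'),
  %w[applications ingredients]
)
```

Alternatively, `bin/rails db:fixtures:load` loads every fixture set in one go.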
For the actual load test, we can:
- Deploy a separate benchmark driver with CWB and use JMeter, Gatling, Locust, or Artillery to generate load.
- Follow Flood's guide to load testing a RESTful Ruby app with Ruby-JMeter: https://blog.flood.io/load-testing-a-restful-api-with-ruby-jmeter/ (see the sketch after this list).
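Following that guide, the test plan could look roughly like the minimal Ruby-JMeter sketch below. The host name, endpoints, and load parameters are placeholders; the real request mix would be derived from the integration tests.

```ruby
# load_test_plan.rb -- minimal Ruby-JMeter sketch; host and endpoints are placeholders.
require 'ruby-jmeter'

target = 'http://cloud-stove.example.com' # deployed Cloud Stove instance (placeholder)

test do
  # 50 concurrent users, ramped up over 60 s, each running 100 iterations.
  threads count: 50, rampup: 60, loops: 100 do
    # Illustrative request mix; replace with the paths exercised by the integration tests.
    visit name: 'List applications',   url: "#{target}/applications"
    visit name: 'Show recommendation', url: "#{target}/applications/1/recommendation"
    think_time 1000 # ~1 s pause between iterations
  end
end.jmx(file: 'cloud_stove_plan.jmx') # write the JMeter test plan to disk
```

The generated .jmx file could then be handed to the JMeter benchmark driver deployed via CWB, or run locally with Ruby-JMeter's `run` method if JMeter is installed.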
Naturally, we will use CWB to deploy these load tests. Deploying a JMeter benchmark driver with CWB should already be implemented, since this was part of Christian Davatz's master's thesis.
In addition to the Cloud Stove Rails app, we would ideally also deploy a microservices app like the Socks Shop, but this can probably wait until we have archetypes for microservices.