As a Mojaloop Hub adopter
I want to know the isolated performance characterisation of the ALS, so that I can design an environment that depends on its output.
Acceptance Criteria:
Infrastructure
Start with everything on the same box.
[x] Verify that a docker-compose is created that sets up:
[x] ~JMeter~ / k6;
[x] ALS;
[x] Grafana;
[x] Prometheus + Node Exporter;
[x] Grafana dashboard for the Node exporter (out of box preferred)
[x] Prometheus needs to be configured to pull metrics from the ALS metrics endpoint
[x] Verify that the service to collate the metric data is included
[x] Monitoring
[x] Perf-load test
[x] Test Load Tool WS wait-for-callback based on Trace Header event
[x] Callback Handler provides WS server for Load Tool can subscribe for callback events based on Trace ID + Operation (e.g. PUT /party) + Id (e.g. MSISDN)
[x] K6 example test script subscribes to Callback Handler WS Service to receive callback events based on Trace ID + Operation (e.g. PUT /party) + Id (e.g. MSISDN)
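The wait-for-callback flow above hinges on correlating each asynchronous ALS callback back to the load-test client that triggered it. A minimal sketch of that correlation logic is below; all names (`CallbackRegistry`, `waitFor`, `deliver`) are illustrative, not the actual Callback Handler API, and the real service would sit behind the WS server rather than in-process.

```javascript
// Hypothetical sketch of the Callback Handler's correlation logic:
// pending subscriptions are keyed by Trace ID + Operation + Id, and an
// incoming callback resolves the matching waiter (or the waiter times out).

const buildKey = (traceId, operation, id) => `${traceId}|${operation}|${id}`;

class CallbackRegistry {
  constructor() {
    this.pending = new Map(); // key -> resolve function
  }

  // A load-test client (e.g. a k6 VU, via the WS server) waits here.
  waitFor(traceId, operation, id, timeoutMs = 5000) {
    const key = buildKey(traceId, operation, id);
    return new Promise((resolve, reject) => {
      const timer = setTimeout(() => {
        this.pending.delete(key);
        reject(new Error(`timeout waiting for ${key}`));
      }, timeoutMs);
      this.pending.set(key, (payload) => {
        clearTimeout(timer);
        this.pending.delete(key);
        resolve(payload);
      });
    });
  }

  // Called when the ALS callback arrives at the handler.
  deliver(traceId, operation, id, payload) {
    const resolve = this.pending.get(buildKey(traceId, operation, id));
    if (resolve) resolve(payload);
  }
}

// Example: a PUT /parties callback correlated by trace header + MSISDN.
const registry = new CallbackRegistry();
registry
  .waitFor('trace-123', 'PUT /parties', '27713803912')
  .then((p) => console.log('callback received:', p.status));
registry.deliver('trace-123', 'PUT /parties', '27713803912', { status: 'OK' });
```

Keying on Trace ID alone would be simpler, but including Operation and Id lets one trace carry multiple callbacks (e.g. party lookup plus participant lookup) without ambiguity.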
Testing
Don't need to test with a transfer; just test the standard Node.js exported metrics.
Overlays
It is preferable to design the docker-compose as overlays.
I.e.
~JMeter~ / k6
Grafana + Prometheus
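One way to structure the overlays (file names, images, and ports below are illustrative, not prescribed by this story): a base `docker-compose.yml` holds the ALS, and optional override files are merged in by listing them with repeated `-f` flags.

```yaml
# docker-compose.monitoring.yml — hypothetical monitoring overlay, merged
# over the base file with:
#   docker compose -f docker-compose.yml -f docker-compose.monitoring.yml up
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
  node-exporter:
    image: prom/node-exporter:latest
```

The Prometheus scrape target for the ALS would then live in the mounted config, e.g. (service name and port assumed):

```yaml
# prometheus.yml — scrape the ALS metrics endpoint
scrape_configs:
  - job_name: account-lookup-service
    static_configs:
      - targets: ['account-lookup-service:4002']
```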
Note
Depending on how this story plays out, there may be an opportunity to scale out to a larger deployment and run more comprehensive tests.
SDK
Consider using the SDK for this, as the SDK would be a more realistic test.
If this is used, then the SDK will also need to be characterised.
The SDK will need to be scaled for this to work.
Tasks:
[x] Define approach / docker-compose for monitoring
[x] Define dashboards for standard node/container visualization
[x] Update documentation for the monitoring approach
[x] Define approach / docker-compose for executing perf-load test
[x] Define dashboards for perf-load test visualization
[x] Update documentation for the perf-load test approach
Done
[ ] Acceptance Criteria pass
[ ] Designs are up to date
[ ] Unit Tests pass
[ ] Integration Tests pass
[ ] Code Style & Coverage meets standards
[ ] Changes made to config (default.json) are broadcast to team and follow-up tasks added to update helm charts and other deployment config.
Pull Requests:
Follow-up:
Dependencies:
Accountability: