The test environment introduced in BuoyantIO/emojivoto#41, intended to exercise route, telemetry, and service discovery lifecycles in Conduit, includes the following:
- conduit control + data plane
- 1 slow-cooker, 100qps
- 10 HTTP point-to-point servers
- 10 gRPC terminus servers, with container restart every 600 requests (~1 minute)
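For reference, the ~1 minute restart cadence follows from even fan-out: 100qps spread over 10 terminus servers is 10qps per server, so 600 requests take about 60 seconds. A minimal sketch of how that cadence scales across the proposed qps sweep (the constants mirror the setup above; the even fan-out is an assumption, not stated in the issue):

```python
TERMINUS_SERVERS = 10
RESTART_EVERY = 600  # requests per container before it restarts


def restart_interval_seconds(total_qps, servers=TERMINUS_SERVERS):
    """Seconds between container restarts, assuming requests fan out evenly."""
    per_server_qps = total_qps / servers
    return RESTART_EVERY / per_server_qps


for qps in (10, 100, 1_000, 10_000):
    print(f"{qps:>6} qps -> restart every {restart_interval_seconds(qps):.1f}s")
```

At 10,000qps the containers would be restarting roughly every 0.6 seconds, which is worth keeping in mind when interpreting telemetry at the high end of the sweep.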
We should extend this environment to exercise additional dimensions, namely:
- [x] BuoyantIO/emojivoto#45 M number of point-to-point servers
- [x] BuoyantIO/emojivoto#45 N number of terminus servers
- [ ] X call-depth length via point-to-point servers
- [x] BuoyantIO/emojivoto#45 Y discrete call graphs
- [x] BuoyantIO/emojivoto#44 full pod restart, on an interval, rather than container restart
- [x] BuoyantIO/emojivoto#49 Deployment rolling restart, on an interval
- [x] BuoyantIO/emojivoto#51 injected vs. baseline performance
- [ ] performance at 10qps, 100qps, 1,000qps, 10,000qps
- [x] BuoyantIO/emojivoto#51 with and without TLS (depends on runconduit/conduit#1295)
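The qps sweep could be driven by the existing slow-cooker. A hedged invocation sketch, assuming slow_cooker's `-qps` and `-concurrency` flags; the `web-svc:8080` target is a placeholder, not a hostname from this issue, and `-qps` is per-connection in slow_cooker, so the per-run flag values would need adjusting to hit the intended totals:

```shell
# Sweep total load levels against the environment's entry point.
# web-svc:8080 is a placeholder target; adjust -qps/-concurrency so
# their product matches the intended total qps for each run.
for total_qps in 10 100 1000 10000; do
  slow_cooker -qps "$((total_qps / 10))" -concurrency 10 \
    "http://web-svc:8080"
done
```

Each load level would likely need to run long enough to span several container-restart cycles before its telemetry is comparable.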
The Prometheus Benchmark Grafana dashboard introduced in https://github.com/runconduit/conduit/pull/984 should provide better performance analysis of Prometheus in Conduit.

Relates to BuoyantIO/emojivoto#42.
(this issue copied from BuoyantIO/emojivoto#43)