redbadger / microservice-benchmark

Comparison of microservice performance, resource consumption and startup time using various tech
MIT License

Readme thoughts #4

Open samwhite opened 4 years ago

samwhite commented 4 years ago

> We’re focusing on a RESTful HTTP API, which for each incoming request, makes a small number of upstream API requests, collects data from the responses and responds to the original request. The majority of the time taken to respond is waiting for responses from upstream requests. Therefore the performance is IO-bound. We believe this is a fairly typical scenario in microservice architectures in large-scale enterprise systems.

"upstream" should say "downstream" (see RFC 2616 §1.3: https://tools.ietf.org/html/rfc2616#section-1.3)

> Therefore the performance is IO-bound.

I think this is a faulty corollary... do you mean network IO? And what do we mean by performance (concurrency / latency)?

> startup time and resource usage until ready to handle requests

This seems highly specific to the use case I know you're looking at currently 😄 I think generally this is not an issue, or at least one which k8s/container orchestration should remove.

(the diagram)

> node for a simulated back-end (e.g. 2-second latency)

If you're looking to emulate legacy systems, I would make this significantly higher, and add some variance (adding variance here can introduce some interesting behaviours with certain GCs). Would recommend 5-10 seconds with some sort of pseudo-random distribution.
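A minimal sketch (in Python, purely illustrative; the benchmark's simulated back-end is written in other languages) of sampling a pseudo-random delay in the suggested 5-10 second range. The uniform distribution and the names here are assumptions, not part of the benchmark:

```python
import asyncio
import random

# Hypothetical delay bounds, per the 5-10 second suggestion above.
MIN_DELAY_S = 5.0
MAX_DELAY_S = 10.0

def simulated_latency() -> float:
    """Sample a pseudo-random delay (uniform between the bounds).

    Any distribution with real variance would do; the point is to avoid
    a fixed delay, which can mask GC-related behaviours.
    """
    return random.uniform(MIN_DELAY_S, MAX_DELAY_S)

async def handle_request() -> str:
    """Stand-in for the simulated legacy back-end: wait, then respond."""
    await asyncio.sleep(simulated_latency())
    return "ok"
```

A lognormal or bimodal distribution would be an equally valid choice; what matters is that the delay varies between requests.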

You may also want to discuss how many requests you will be sending in parallel at any one time and the ramp-up rate (locust uses 'Number of Users' and 'Hatch Rate' to describe these) - as these can make a notable difference.
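For illustration, a hypothetical locustfile showing where those two knobs live; the class name and endpoint are assumptions, and on recent Locust versions the CLI flags are `--users` and `--spawn-rate` (the latter replaced "hatch rate"):

```python
# Hypothetical locustfile.py; class name and endpoint are assumptions.
from locust import HttpUser, between, task

class ApiUser(HttpUser):
    # Simulated think time between requests, per user.
    wait_time = between(1, 2)

    @task
    def fetch(self):
        # Stand-in endpoint for the service under test.
        self.client.get("/")

# Run with e.g.: locust -f locustfile.py --users 100 --spawn-rate 10
```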

> Java, Spring Boot with Servlet API (Tomcat, @Async). Thread pool concurrency model (blocking IO) for incoming and outgoing requests

Are the downstream requests made with the same thread pool or a separate one?
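To illustrate why the answer matters, a small Python sketch (not the Spring Boot setup itself) of the two-pool arrangement. With a single shared pool, handlers that block waiting on their own downstream futures can occupy every worker and starve the downstream tasks into a deadlock; separate pools avoid that:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical pools: one for incoming requests, one for downstream calls.
incoming = ThreadPoolExecutor(max_workers=2)
downstream = ThreadPoolExecutor(max_workers=2)

def call_downstream(n: int) -> int:
    # Stand-in for a blocking call to a downstream API.
    return n * 2

def handle(n: int) -> int:
    # The handler blocks on the downstream future. If this were submitted
    # to the *same* pool and all of its workers were busy inside handle(),
    # the downstream tasks could never be scheduled: a starvation deadlock.
    return downstream.submit(call_downstream, n).result()

futures = [incoming.submit(handle, i) for i in range(4)]
results = [f.result() for f in futures]
print(results)  # [0, 2, 4, 6]
```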

> We will also test the performance limits of the legacy back-end to ensure that capacity here is not a problem. However, tech choice is much less relevant here, so we choose Go and Rust, for self-indulgent purposes :-)

Probably not relevant given you'll provide a high amount of resources, but I would rewrite this bit to say that you've picked the highest-performance ones currently available, see: https://www.techempower.com/benchmarks/. Drogon or Actix are the most performant, and thus least likely to introduce noise or variance to test results.

StuartHarris commented 4 years ago

Yay! Thanks for your comments @samwhite – much appreciated.