luoyucumt opened this issue 5 months ago
At 2000 connections I still see a 99% latency of 201.06ms. Is that not good? It makes sense that latency increases as the number of connections grows, since both wrk and fasthttp consume more CPU. Did you expect anything else here?
In my production environment, I use the fasthttp client to call third-party services. During peak traffic, the client sees elevated latency, with some requests delayed by several seconds. To investigate, I ran a stress test and found that latency grows as the number of connections increases.
Fasthttp version: v1.55.0
Stress test environment
Simulating a Third-Party Service with Code:
Code Snippet for Simulating Third-Party Service Calls
Results Obtained Using the Load Testing Tool:
1 connection:
10 connections:
50 connections:
100 connections:
500 connections:
1000 connections:
1500 connections:
2000 connections:
As the number of connections increases, latency rises, even though the third-party service itself still responds quickly; in this test its response time is measured in microseconds (µs).
I used flame graphs to help with the analysis. Most of the time appears to be spent in system calls. What can I do to reduce response latency in this situation?