Open · 2opremio opened this issue 7 years ago
Hmm, I'm not certain if that would help. The author of wrk2 explains why they chose to measure latency with constant throughput here: https://github.com/giltene/wrk2#acknowledgements
I could generate graphs for several different throughputs, but that would also start measuring general language and runtime library efficiency (header parsing, etc.).
But what if, in addition to the normal GC work, the program also created N short-lived objects (alive only for the duration of the request) for some percentage P of requests? That would more accurately mirror what happens in real applications, where many more temporary objects are created, not just long-lived ones. If the GC can't deal with that well, it will also affect the latency.
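A minimal sketch of what such a change could look like, assuming the benchmark serves requests through an ordinary `net/http` handler; the handler itself, the constants N and P, and the object size are all hypothetical here, not taken from the benchmark's actual code:

```go
package main

import (
	"log"
	"math/rand"
	"net/http"
)

const (
	shortLivedPerRequest = 10000 // hypothetical N: temporary objects created per affected request
	allocProbability     = 0.25  // hypothetical P: fraction of requests that create them
)

// handler simulates per-request garbage: for roughly P of all requests it
// allocates N small short-lived objects that become unreachable as soon as
// the request finishes, on top of whatever long-lived state the benchmark
// already maintains.
func handler(w http.ResponseWriter, r *http.Request) {
	if rand.Float64() < allocProbability {
		garbage := make([][]byte, shortLivedPerRequest)
		for i := range garbage {
			garbage[i] = make([]byte, 64) // small temporary allocation
		}
		_ = garbage // dropped when the handler returns
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Driving this with wrk2 at the same constant request rates as before would then show how much the extra short-lived allocations move the latency percentiles.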
We are constantly being bitten by Golang's GC; its performance (i.e. throughput) seems to be the culprit. We think that having a non-generational, non-compacting GC is seriously affecting the performance of our application (our app generates a lot of short-lived objects and uses some immutable data structures).
It would be good to have a confirmation through an independent throughput test.