axw opened 1 year ago
Adding links for posterity.
We should look at how both disk reads and writes grow with event ingest rate and sampling rate. The rate of writes is generally proportional to the ingest rate, but the rate of reads is expected to be proportional to ingest rate × sampling rate.
To be more exact, in a multi-apm-server setup, the expectation is that the rate of writes is proportional to the *local* ingest rate, i.e. the ingest rate local to the apm-server under observation.
On the read side, the expectation is that the rate of reads is proportional to local ingest rate × sampling rate. However, before fix #13464, apm-server suffered from a rate of reads proportional to *global* ingest rate × sampling rate, which means disk IO and memory usage that do not scale.
We should compare the performance of Badger v2 (in use at the time of writing) vs. v4 (proposed).
Related to #11546
We have a good benchmarking setup for general apm-server ingest performance, but tail-based sampling (TBS) is a bit of a blind spot. We have benchmarked it manually in the past, but we don't have a framework for repeatable TBS testing.
Once we have established a baseline performance, we should add to the public documentation. This should include details about the disks used and what kinds of disks are recommended, as well as expectations about disk and memory usage in relation to ingest rate and sampling rate. Documentation on TBS performance should probably follow on from https://github.com/elastic/apm-server/issues/7842.
We will need https://github.com/elastic/apm-server/issues/7845. Assuming we use apmbench, we will need to enable `-rewrite-ids` to ensure `trace.id` and per-trace events are not repeated, which would affect TBS.

Note to whoever works on this:
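For reference, a hedged sketch of what such a run might look like — the server URL and duration are placeholders, and the exact flag spellings should be checked against the apmbench version in use:

```shell
# Hypothetical invocation: replay events against a TBS-enabled apm-server,
# rewriting IDs so replayed traces get unique trace.ids rather than
# repeating the same per-trace events.
go run ./cmd/apmbench \
  -server=https://my-apm-server.example:8200 \
  -rewrite-ids \
  -benchtime=60s
```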