The PR optimizes the way samples are aggregated and symbolized:
For large datasets exceeding 16K samples, implement a sparse set to accumulate samples. Due to the "dense" nature of stack trace identifiers, memory usage remains efficient. Additionally, this approach naturally orders the samples by their stack trace identifiers.
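The approach can be sketched with a classic sparse/dense set pair. This is an illustrative sketch of the technique, not the actual Pyroscope implementation; all names here are assumptions. Membership checks are O(1), the backing arrays need no per-query re-initialization, and because stack trace IDs are dense, iterating the sparse index space yields samples in ID order for free:

```go
package main

import "fmt"

// sparseSet accumulates per-stack-trace sample values.
// sparse is indexed by stack trace ID; dense holds the IDs actually
// seen. Hypothetical sketch of the approach, not Pyroscope's code.
type sparseSet struct {
	sparse []uint32 // sparse[id] = position in dense (if member)
	dense  []uint32 // stack trace IDs present, in insertion order
	values []int64  // values[i] accumulates samples for dense[i]
}

func newSparseSet(maxID uint32) *sparseSet {
	return &sparseSet{sparse: make([]uint32, maxID+1)}
}

func (s *sparseSet) has(id uint32) bool {
	i := s.sparse[id]
	return int(i) < len(s.dense) && s.dense[i] == id
}

func (s *sparseSet) add(id uint32, v int64) {
	if s.has(id) {
		s.values[s.sparse[id]] += v
		return
	}
	s.sparse[id] = uint32(len(s.dense))
	s.dense = append(s.dense, id)
	s.values = append(s.values, v)
}

// each visits entries in increasing ID order by scanning the sparse
// index space; this is cheap precisely because the IDs are dense.
func (s *sparseSet) each(fn func(id uint32, v int64)) {
	for id := range s.sparse {
		if s.has(uint32(id)) {
			fn(uint32(id), s.values[s.sparse[id]])
		}
	}
}

func main() {
	s := newSparseSet(1000)
	s.add(42, 10)
	s.add(7, 5)
	s.add(42, 3)
	s.each(func(id uint32, v int64) { fmt.Println(id, v) })
	// prints "7 5" then "42 13"
}
```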
Instead of building the resulting tree from scratch for large datasets, use the stored parent-pointer tree as a prototype. This significantly reduces the number of memory accesses, improving overall efficiency. The downside is that truncation may become more "invasive" at the leaves: the source tree is built from locations, not functions, so the "other" stub may absorb siblings' samples. In practice, the impact is moderate and can be mitigated by increasing the max nodes limit (while still remaining more efficient: compare the 8K and 64K benchmarks). A proper solution would be to store function stack traces in addition to the tree of locations.
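To illustrate why a parent-pointer tree is cheap to reuse as a prototype, here is a minimal sketch (the type names and layout are assumptions, not Pyroscope's actual types). The tree is a flat slice where each node stores only its parent's index; since parents precede children, a single reverse pass turns self values into subtree totals without rebuilding any linked structure:

```go
package main

import "fmt"

// node in a parent-pointer tree stored as a flat slice.
// Illustrative sketch only; names are assumptions.
type node struct {
	parent int32  // index of the parent node, -1 for the root
	loc    uint64 // location identifier
	total  int64  // self value; becomes subtree total after propagate
}

// propagate folds each node's value into its parent. It assumes
// parents precede children in the slice (true for trees built by
// appending nodes as stack traces are inserted), so one reverse
// pass suffices and no pointer-chasing tree rebuild is needed.
func propagate(nodes []node) {
	for i := len(nodes) - 1; i > 0; i-- {
		if p := nodes[i].parent; p >= 0 {
			nodes[p].total += nodes[i].total
		}
	}
}

func main() {
	// root(0) ─ a(1) ─ b(2); root(0) ─ c(3)
	nodes := []node{
		{parent: -1, loc: 0, total: 0},
		{parent: 0, loc: 10, total: 1},
		{parent: 1, loc: 20, total: 4},
		{parent: 0, loc: 30, total: 2},
	}
	propagate(nodes)
	fmt.Println(nodes[0].total) // 7
}
```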
Specialize the min-heap implementation for `int64` to avoid allocations caused by the interface conversion.
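For context, the standard `container/heap` operates through `heap.Interface`, so every `Push` takes an `any` and boxes its argument, which allocates. A specialized heap keeps plain `int64`s in a slice. A minimal sketch of such a heap (not the PR's actual code):

```go
package main

import "fmt"

// minHeapInt64 is a min-heap specialized for int64 values. Unlike
// container/heap, which boxes pushed values as interface{}, this
// version allocates only when the backing slice grows.
type minHeapInt64 []int64

func (h *minHeapInt64) push(v int64) {
	*h = append(*h, v)
	s := *h
	for i := len(s) - 1; i > 0; { // sift up
		p := (i - 1) / 2
		if s[p] <= s[i] {
			break
		}
		s[p], s[i] = s[i], s[p]
		i = p
	}
}

func (h *minHeapInt64) pop() int64 {
	s := *h
	min := s[0]
	n := len(s) - 1
	s[0] = s[n]
	s = s[:n]
	*h = s
	for i := 0; ; { // sift down
		l, r, m := 2*i+1, 2*i+2, i
		if l < n && s[l] < s[m] {
			m = l
		}
		if r < n && s[r] < s[m] {
			m = r
		}
		if m == i {
			break
		}
		s[i], s[m] = s[m], s[i]
		i = m
	}
	return min
}

func main() {
	var h minHeapInt64
	for _, v := range []int64{5, 1, 3, 2} {
		h.push(v)
	}
	for len(h) > 0 {
		fmt.Print(h.pop(), " ") // 1 2 3 5
	}
	fmt.Println()
}
```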
Benchmark of the `SelectMergeByStacktraces` query on the real data set that causes performance issues in the current implementation (~1K profiles, ~10K samples each):
```
│    before    │               after                │
│    sec/op    │    sec/op     vs base              │
  3599.9m ± 2%   661.8m ± 1%  -81.62% (p=0.000 n=10)

│    before    │               after                │
│     B/op     │     B/op      vs base              │
 1054.5Mi ± 1%   999.9Mi ± 3%  -5.18% (p=0.000 n=10)

│    before    │               after                │
│  allocs/op   │  allocs/op    vs base              │
   494.4k ± 0%   421.2k ± 0%  -14.80% (p=0.000 n=10)
```
Most of the allocations are made in parquet decoding and reconstruction of the symbolic information (locations, functions, mappings, and strings). This is addressed in https://github.com/grafana/pyroscope/pull/3138.
This is quite close to the synthetic benchmarks we have:
Note that the optimizations mostly concern large data sets (significantly bigger than our `ResolveTree_Big`). Some of them are not used when the `max nodes` limit is not specified (it defaults to 16K) or is too large, as they would then be inefficient because of the increased memory consumption. Therefore, e.g., `Resolver_ResolveTree_Big/0` (no truncation) does not perform significantly better.