DataDog / saluki

An experimental toolkit for building telemetry data planes in Rust.
Apache License 2.0

[APR-208] fix: allow pushing multiple data points within a single metric #217

Closed tobz closed 2 weeks ago

tobz commented 2 weeks ago

Context

In #216, we documented how the aggregate transform behaves suboptimally compared to the Datadog Agent: it leads to additional metric payloads being sent, and thus more network bandwidth being consumed.

This matters because metrics traffic could jump by a large amount, 20 to 40%, for an identical workload, which is an unacceptable difference even at this experimental stage.

Solution

This PR introduces a large body of work to bake in the concept of a single Metric holding multiple timestamp/value pairs, commonly referred to as "data points" in the Datadog Agent and in other popular metrics protocols such as OTLP.

With this change, we can more efficiently shuttle multiple data points from source to transform/destination, and destinations no longer have to implement their own costly and complex aggregation logic to forward these metrics efficiently.

Most of the work centers on the addition of a new value container, MetricValues, which lives alongside MetricValue and handles the hard work of ensuring a homogeneous set of values, holding their timestamps, merging in values based on timestamp, and all of the ancillary operations needed to effectively build and use MetricValues.
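
To make the shape of the change concrete, here is a minimal, hypothetical Rust sketch of a container that holds multiple timestamp/value data points for a single metric and merges them by timestamp. The type and method names here are illustrative assumptions, not the actual MetricValues API introduced by this PR.

```rust
// Hypothetical sketch only: loosely mirrors the "multiple data points per metric"
// idea described above. Not the actual saluki `MetricValues` implementation.

use std::collections::BTreeMap;

/// A homogeneous set of counter data points, keyed by Unix timestamp (seconds).
#[derive(Debug, Default)]
struct CounterPoints {
    points: BTreeMap<u64, f64>,
}

impl CounterPoints {
    /// Adds a value at the given timestamp, merging (summing) with any value
    /// already recorded for that timestamp.
    fn push(&mut self, timestamp: u64, value: f64) {
        *self.points.entry(timestamp).or_insert(0.0) += value;
    }

    /// Merges another set of points into this one, timestamp by timestamp.
    fn merge(&mut self, other: CounterPoints) {
        for (ts, value) in other.points {
            self.push(ts, value);
        }
    }

    /// Iterates over (timestamp, value) pairs in timestamp order: the shape a
    /// destination would serialize as multiple data points in one payload.
    fn iter(&self) -> impl Iterator<Item = (u64, f64)> + '_ {
        self.points.iter().map(|(ts, v)| (*ts, *v))
    }
}

fn main() {
    let mut a = CounterPoints::default();
    a.push(1_700_000_000, 1.0);
    a.push(1_700_000_010, 2.0);

    let mut b = CounterPoints::default();
    b.push(1_700_000_010, 3.0);

    // Merging keeps a single metric with multiple data points instead of
    // emitting a separate metric payload per timestamp.
    a.merge(b);
    for (ts, value) in a.iter() {
        println!("{ts} -> {value}");
    }
}
```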

Fixes #216.

pr-commenter[bot] commented 2 weeks ago

Regression Detector (DogStatsD)

Regression Detector Results

Run ID: f9d8b06b-add6-4080-af9b-7ccec9295291

Baseline: 7.55.2 Comparison: 7.55.3

Performance changes are noted in the perf column of each table:

No significant changes in experiment optimization goals

Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI | links |
|------|------------|------|----------|-------------|-------|
| ➖ | dsd_uds_512kb_3k_contexts | ingress throughput | +0.07 | [+0.00, +0.13] | |
| ➖ | dsd_uds_1mb_50k_contexts_memlimit | ingress throughput | +0.03 | [+0.00, +0.06] | |
| ➖ | dsd_uds_100mb_3k_contexts | ingress throughput | +0.02 | [+0.01, +0.03] | |
| ➖ | dsd_uds_1mb_50k_contexts | ingress throughput | +0.02 | [-0.01, +0.05] | |
| ➖ | dsd_uds_100mb_250k_contexts | ingress throughput | +0.00 | [-0.05, +0.06] | |
| ➖ | dsd_uds_500mb_3k_contexts | ingress throughput | +0.00 | [-0.00, +0.01] | |
| ➖ | dsd_uds_10mb_3k_contexts | ingress throughput | -0.00 | [-0.04, +0.04] | |
| ➖ | dsd_uds_1mb_3k_contexts | ingress throughput | -0.05 | [-0.09, -0.01] | |
| ➖ | dsd_uds_100mb_3k_contexts_distributions_only | memory utilization | -0.45 | [-0.69, -0.22] | |

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that *if our statistical model is accurate*, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
3. Its configuration does not mark it "erratic".
pr-commenter[bot] commented 2 weeks ago

Regression Detector (Saluki)

Regression Detector Results

Run ID: 892b2b8b-4aea-4fd0-b7ef-1d9a6d128208

Baseline: a5bdd380459d9ed3dd97c4fef6b53e9cb40e1ba8 Comparison: 86dd5d8acb1441833a34107eea98ea3933f4ce70

Performance changes are noted in the perf column of each table:

Significant changes in experiment optimization goals

Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%

| perf | experiment | goal | Δ mean % | Δ mean % CI | links |
|------|------------|------|----------|-------------|-------|
| ✅ | dsd_uds_100mb_250k_contexts | ingress throughput | +6.52 | [+5.78, +7.25] | |
| ✅ | dsd_uds_100mb_3k_contexts_distributions_only | memory utilization | -7.86 | [-7.99, -7.72] | |

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI | links |
|------|------------|------|----------|-------------|-------|
| ✅ | dsd_uds_100mb_250k_contexts | ingress throughput | +6.52 | [+5.78, +7.25] | |
| ➖ | dsd_uds_1mb_50k_contexts_memlimit | ingress throughput | +2.13 | [-1.11, +5.37] | |
| ➖ | dsd_uds_500mb_3k_contexts | ingress throughput | +0.66 | [+0.55, +0.76] | |
| ➖ | dsd_uds_512kb_3k_contexts | ingress throughput | +0.07 | [+0.00, +0.14] | |
| ➖ | dsd_uds_1mb_3k_contexts | ingress throughput | +0.07 | [+0.02, +0.12] | |
| ➖ | dsd_uds_50mb_10k_contexts_no_inlining_no_allocs | ingress throughput | +0.01 | [-0.04, +0.05] | |
| ➖ | dsd_uds_10mb_3k_contexts | ingress throughput | +0.00 | [-0.05, +0.06] | |
| ➖ | dsd_uds_50mb_10k_contexts_no_inlining | ingress throughput | +0.00 | [-0.00, +0.00] | |
| ➖ | dsd_uds_1mb_50k_contexts | ingress throughput | -0.00 | [-0.00, +0.00] | |
| ➖ | dsd_uds_100mb_3k_contexts | ingress throughput | -0.00 | [-0.01, +0.00] | |
| ✅ | dsd_uds_100mb_3k_contexts_distributions_only | memory utilization | -7.86 | [-7.99, -7.72] | |

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that *if our statistical model is accurate*, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
3. Its configuration does not mark it "erratic".
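
For readers skimming the criteria above, here is a small, hypothetical Rust sketch of that decision rule as described (5.00% effect size tolerance, a CI that excludes zero, and no "erratic" marking). It is illustrative only and not the regression detector's actual implementation.

```rust
// Illustrative restatement of the decision rule described above; the real
// regression detector is a separate tool and its internals are not shown here.

struct ExperimentResult {
    delta_mean_pct: f64, // Δ mean %
    ci_low_pct: f64,     // lower bound of the Δ mean % confidence interval
    ci_high_pct: f64,    // upper bound of the Δ mean % confidence interval
    erratic: bool,       // whether the experiment's configuration marks it "erratic"
}

/// Returns true when a change is flagged as worth investigating further:
/// the effect size exceeds the tolerance, the confidence interval excludes
/// zero, and the experiment is not marked erratic.
fn is_significant(result: &ExperimentResult, tolerance_pct: f64) -> bool {
    let big_enough = result.delta_mean_pct.abs() >= tolerance_pct;
    let ci_excludes_zero = result.ci_low_pct > 0.0 || result.ci_high_pct < 0.0;
    big_enough && ci_excludes_zero && !result.erratic
}

fn main() {
    // Example: the +6.52% [+5.78, +7.25] ingress throughput change reported in
    // the Saluki table above satisfies all three criteria, so it is flagged.
    let result = ExperimentResult {
        delta_mean_pct: 6.52,
        ci_low_pct: 5.78,
        ci_high_pct: 7.25,
        erratic: false,
    };
    assert!(is_significant(&result, 5.0));
}
```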
pr-commenter[bot] commented 2 weeks ago

Regression Detector Links

Experiment Result Links

| experiment | link(s) |
|------------|---------|
| dsd_uds_100mb_250k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_100mb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_100mb_3k_contexts_distributions_only | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_10mb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_1mb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_1mb_50k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_1mb_50k_contexts_memlimit | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_500mb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_512kb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
| dsd_uds_50mb_10k_contexts_no_inlining (ADP only) | [Profiling (ADP)] [SMP Dashboard] |
| dsd_uds_50mb_10k_contexts_no_inlining_no_allocs (ADP only) | [Profiling (ADP)] [SMP Dashboard] |