tobz opened 2 weeks ago
Run ID: 75da82cf-b1bb-42cb-8e8e-f612161a3ad1
Baseline: 7.55.2 Comparison: 7.55.3
Performance changes are noted in the perf column of each table:
Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%
There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
Run ID: 88e47a21-9e72-443c-8c89-3338872bb552
Baseline: c1acd462d9365f0c1ce55a5a2cc4db053ce91e47 Comparison: 539f1c9440c12d1a3e2daa72a869f91eda042e02
Performance changes are noted in the perf column of each table:
Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%
perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
---|---|---|---|---|---|---|
❌ | dsd_uds_100mb_3k_contexts_distributions_only | memory utilization | +19.49 | [+19.22, +19.75] | 1 | |
✅ | dsd_uds_1mb_50k_contexts_memlimit | ingress throughput | +13.00 | [+9.82, +16.18] | 1 | |
❌ | dsd_uds_100mb_250k_contexts | ingress throughput | -5.43 | [-5.92, -4.93] | 1 | |
experiment | link(s) |
---|---|
dsd_uds_100mb_250k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
dsd_uds_100mb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
dsd_uds_100mb_3k_contexts_distributions_only | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
dsd_uds_10mb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
dsd_uds_1mb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
dsd_uds_1mb_50k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
dsd_uds_1mb_50k_contexts_memlimit | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
dsd_uds_500mb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
dsd_uds_512kb_3k_contexts | [Profiling (ADP)] [Profiling (DSD)] [SMP Dashboard] |
dsd_uds_50mb_10k_contexts_no_inlining (ADP only) | [Profiling (ADP)] [SMP Dashboard] |
dsd_uds_50mb_10k_contexts_no_inlining_no_allocs (ADP only) | [Profiling (ADP)] [SMP Dashboard] |
Just to jot down some notes here...

The two biggest problems come from the two things we really want to be able to do:

- know, precisely, when a context is no longer referenced by anything other than the resolver itself
- only actually remove a context after it has been unused for some period of time (a true TTL)
We can solve the first problem with `Arc<T>`-like semantics: just track when no outstanding reference to a context exists (besides our reference in the resolver) and then trigger the removal of that context... but that means that while we're very precise about expiration, we actually expire too quickly, and so we spend gobs of time re-interning because we have to search through the interner all over again.
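For illustration, here's roughly what that `Arc`-based liveness tracking looks like. This is a minimal sketch, not the actual resolver code; the `Resolver`/`expire_unused` names are invented. The key idea is that if the resolver holds exactly one strong reference per context, `Arc::strong_count == 1` means nothing else references it and it can be expired immediately:

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Sketch: the resolver holds one strong reference per resolved context.
// A strong count of 1 means only the resolver still references it.
struct Resolver {
    contexts: HashMap<String, Arc<String>>,
}

impl Resolver {
    fn new() -> Self {
        Resolver { contexts: HashMap::new() }
    }

    // Hand out a clone of the resolved context, creating it on first use.
    fn resolve(&mut self, key: &str) -> Arc<String> {
        Arc::clone(
            self.contexts
                .entry(key.to_string())
                .or_insert_with(|| Arc::new(key.to_string())),
        )
    }

    // Drop every context whose only remaining reference is ours.
    fn expire_unused(&mut self) -> usize {
        let before = self.contexts.len();
        self.contexts.retain(|_, ctx| Arc::strong_count(ctx) > 1);
        before - self.contexts.len()
    }
}

fn main() {
    let mut resolver = Resolver::new();
    let live = resolver.resolve("metric_a");
    let dead = resolver.resolve("metric_b");
    drop(dead);

    // Only "metric_b" has no outstanding references left.
    assert_eq!(resolver.expire_unused(), 1);
    assert_eq!(resolver.contexts.len(), 1);
    drop(live);
    println!("expired exactly the unused context");
}
```

This shows the precision/eagerness trade-off described above: the moment the last outside clone drops, the very next `expire_unused` pass removes the context, even if the same context will be resolved again a millisecond later.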
If we made the interner O(1)-esque, then this might not be a problem... but doing so would also mean that it would be far less bounded than it currently is.
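To make that trade-off concrete, here's a hypothetical hash-based interner sketch (names invented, not the real interner): lookups become O(1), but memory is now bounded only by an explicit entry cap rather than inherently by a fixed backing buffer:

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Hypothetical O(1)-style interner: hash lookup instead of scanning a
// fixed buffer, bounded only by an explicit entry cap.
struct Interner {
    strings: HashMap<Arc<str>, ()>,
    max_entries: usize,
}

impl Interner {
    fn new(max_entries: usize) -> Self {
        Interner { strings: HashMap::new(), max_entries }
    }

    // Return the existing interned copy, or insert if there's room.
    fn intern(&mut self, s: &str) -> Option<Arc<str>> {
        if let Some((existing, _)) = self.strings.get_key_value(s) {
            return Some(Arc::clone(existing));
        }
        if self.strings.len() >= self.max_entries {
            return None; // cap hit: caller must fall back to an owned allocation
        }
        let interned: Arc<str> = Arc::from(s);
        self.strings.insert(Arc::clone(&interned), ());
        Some(interned)
    }
}

fn main() {
    let mut interner = Interner::new(2);
    let a = interner.intern("host:web-01").unwrap();
    let b = interner.intern("host:web-01").unwrap();
    // Repeat lookups hand back the same allocation.
    assert!(Arc::ptr_eq(&a, &b));
    interner.intern("env:prod").unwrap();
    // Cap reached: novel strings are refused.
    assert!(interner.intern("service:api").is_none());
    println!("O(1) lookups, but only cap-bounded");
}
```

The cap here is a count of entries, not bytes, which is exactly the "far less bounded" concern: a fixed buffer bounds total memory directly, while a hash map only bounds it indirectly and approximately.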
Likewise, we can trivially solve the second problem by incrementally iterating over the resolved contexts, with sleeps in between. That isn't so much a true TTL as it is an inherent delay between a context becoming unused and being cleaned up. It also means we either need a scheme that allows crawling the list in chunks (which will need locking), or we crawl it in full every time, which naturally gets more and more expensive as the number of resolved contexts goes up... and it still isn't a true TTL.
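A sketch of the chunked-crawl variant, under the same invented `Arc`-per-context assumption as above: the lock is held only while a bounded chunk is examined, and the sleep between chunks is the thing standing in for a real TTL:

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

// Crawl the resolved contexts with a per-pass work budget, holding the
// lock only for one chunk at a time. The sleep between chunks keeps the
// hot path responsive, but also means cleanup lag is incidental rather
// than a real "unused for X seconds" guarantee.
fn crawl_in_chunks(
    contexts: &Mutex<Vec<Arc<String>>>,
    chunk_size: usize,
    pause: Duration,
) -> usize {
    let mut removed = 0;
    let mut idx = 0;
    loop {
        let done = {
            let mut guard = contexts.lock().unwrap();
            let mut budget = chunk_size;
            while budget > 0 && idx < guard.len() {
                if Arc::strong_count(&guard[idx]) == 1 {
                    // Only our reference left: reclaim it. swap_remove pulls
                    // an unexamined tail element into this slot, so don't
                    // advance idx.
                    guard.swap_remove(idx);
                    removed += 1;
                } else {
                    idx += 1;
                }
                budget -= 1;
            }
            idx >= guard.len()
        };
        if done {
            break;
        }
        thread::sleep(pause); // lock released; resolution path can proceed
    }
    removed
}

fn main() {
    let contexts: Mutex<Vec<Arc<String>>> =
        Mutex::new((0..4).map(|i| Arc::new(format!("ctx-{i}"))).collect());
    // Hold outside references to two contexts so they survive the crawl.
    let keep: Vec<Arc<String>> = {
        let guard = contexts.lock().unwrap();
        vec![Arc::clone(&guard[0]), Arc::clone(&guard[2])]
    };
    let removed = crawl_in_chunks(&contexts, 2, Duration::from_millis(1));
    assert_eq!(removed, 2);
    assert_eq!(contexts.lock().unwrap().len(), 2);
    drop(keep);
    println!("crawled in chunks, reclaimed unused contexts");
}
```

Note the full-crawl cost problem is still visible here: one complete pass touches every entry, so per-cycle work grows linearly with the number of resolved contexts regardless of how many are actually reclaimable.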
I was trying to noodle around the idea of how to make the "signal that this context is now unused" bit super cheap, which would allow us to register it somewhere that could then try to do more of a true "has it been unused for more than X seconds?" check... but so far I haven't come up with something sufficiently simple and performant.
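One possible shape for that cheap signal, sketched here purely as a strawman (all names invented): have every handle drop stamp a coarse "last released" timestamp with a single relaxed atomic store, and let a background reaper combine that with the reference count to get an actual "unused for more than X?" check off the hot path:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::time::{Duration, Instant};

// Strawman: handle drops are a single relaxed atomic store; the reaper
// does the expensive "idle past the TTL?" reasoning in the background.
struct Tracked {
    name: String,
    last_drop_ms: AtomicU64, // ms since `epoch`, stamped on each handle drop
}

struct Handle {
    inner: Arc<Tracked>,
    epoch: Instant,
}

impl Drop for Handle {
    fn drop(&mut self) {
        let now_ms = self.epoch.elapsed().as_millis() as u64;
        self.inner.last_drop_ms.store(now_ms, Ordering::Relaxed);
    }
}

// Reap entries that have no outstanding handles AND have been idle past the TTL.
fn reap(resolver: &mut Vec<Arc<Tracked>>, epoch: Instant, ttl: Duration) -> usize {
    let now_ms = epoch.elapsed().as_millis() as u64;
    let ttl_ms = ttl.as_millis() as u64;
    let before = resolver.len();
    resolver.retain(|t| {
        let idle_ms = now_ms.saturating_sub(t.last_drop_ms.load(Ordering::Relaxed));
        Arc::strong_count(t) > 1 || idle_ms < ttl_ms
    });
    before - resolver.len()
}

fn main() {
    let epoch = Instant::now();
    let ttl = Duration::from_millis(10);

    let idle = Arc::new(Tracked { name: "idle".into(), last_drop_ms: AtomicU64::new(0) });
    let busy = Arc::new(Tracked { name: "busy".into(), last_drop_ms: AtomicU64::new(0) });
    let mut resolver = vec![Arc::clone(&idle), Arc::clone(&busy)];

    // Use and release the idle context; keep a live handle to the busy one.
    drop(Handle { inner: Arc::clone(&idle), epoch });
    let live = Handle { inner: Arc::clone(&busy), epoch };

    std::thread::sleep(Duration::from_millis(30));
    drop(idle); // the resolver now holds the only reference to "idle"

    // "idle" is both unreferenced and past its TTL; "busy" still has a handle.
    assert_eq!(reap(&mut resolver, epoch, ttl), 1);
    assert_eq!(resolver[0].name, "busy");
    drop(live);
    println!("true TTL reap: idle-past-TTL removed, live kept");
}
```

This keeps the hot-path cost to roughly one atomic store per drop, but it isn't a full answer either: the reaper still has to visit every entry, and a coarse timestamp per context says nothing about contention or clock granularity under real load.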
Context
Work in progress.