Moved to in-progress as I'm opening PRs this week and testing in staging.
Did not get to start yet, but hopeful for this week.
Discussed this just now, trying to summarize.
My open questions had been the "advertised resident set size" and what it would mean to no longer do time-threshold-based eviction. The latter is easier, and becomes obvious once you look at our `redownloaded_at` histogram -- we simply would not do it, and we would no longer have any of those very fast redownloads.
I pointed out that if we no longer do threshold-based eviction, we might not settle on anything resembling a "resident set size", since it would be guided by the current `EvictionOrder` and by how often the tamer versus the more critical version runs. For a better estimate, @jcsp had been thinking of `MAX(timelines.logical_size)` with a fudge factor, or just the plain synthetic size. This would be used to advertise whether we can accept more tenants.
A post-call comparison, on a single region where there are mostly 12k tenants per pageserver and with the special pageservers removed, of the sum of synthetic sizes to `sum(max(tenant.timelines.logical_size))` gives ratios in the range 1.32 to 3.08, meaning the suggested fudge factor of 2 might work. Removing threshold-based eviction does not matter for this rationale.
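To make the "advertise whether we can accept more tenants" idea concrete, here is a minimal sketch of the estimate discussed above: per-tenant `MAX(timelines.logical_size)` times the fudge factor of 2. The struct and function names are illustrative assumptions, not the actual pageserver API.

```rust
/// Illustrative fudge factor, from the measured ratio range of 1.32..3.08 above.
const LOGICAL_SIZE_FUDGE_FACTOR: u64 = 2;

struct Timeline {
    logical_size: u64,
}

struct Tenant {
    timelines: Vec<Timeline>,
}

/// Estimate how much disk a tenant is expected to keep resident:
/// MAX(timelines.logical_size) * fudge factor.
fn estimated_resident_size(tenant: &Tenant) -> u64 {
    let max_logical_size = tenant
        .timelines
        .iter()
        .map(|tl| tl.logical_size)
        .max()
        .unwrap_or(0);
    max_logical_size * LOGICAL_SIZE_FUDGE_FACTOR
}

/// What a utilization endpoint could advertise: do the estimates for the
/// currently attached tenants still fit under the disk capacity?
fn can_accept_more_tenants(tenants: &[Tenant], disk_capacity_bytes: u64) -> bool {
    let wanted: u64 = tenants.iter().map(estimated_resident_size).sum();
    wanted < disk_capacity_bytes
}
```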
Related to the very fast redownloads, there are two guesses as to the reason, which I investigated after the call:
Both guesses involve `check_availability` operations. These guesses are sort of related, however: a `check_availability` might produce WAL (at least it did at one point), so it might cause the synthetic size logical size point to move forward.
One action point, soon to be recorded in the task list: the logging for redownloads needs to be better, as I only added it as a histogram and these searches are really expensive.
Only got to testing #5304 today due to unrelated staging problems. I need to go over the actual results on ps-7.us-east-2.aws.neon.build.
Assuming the results are sane, the next steps are:

- `select_victims` refactoring

Post-discussion afterthought: if we do disk usage based eviction before all imitations are complete, should the eviction be Lsn based or random...?
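To make the two options concrete, here is a minimal sketch under assumed simplified types; `VictimOrder`, `EvictionCandidate`, and the `rand` crate usage are illustrative, not the pageserver's real `EvictionOrder` machinery.

```rust
use rand::seq::SliceRandom;

/// Simplified stand-in for an LSN; the real pageserver type is richer.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Lsn(u64);

/// A layer that disk usage based eviction could pick as a victim.
struct EvictionCandidate {
    lsn: Lsn,
}

enum VictimOrder {
    /// Evict the candidates with the oldest LSNs first.
    LsnBased,
    /// Pick victims uniformly at random.
    Random,
}

fn order_victims(order: VictimOrder, candidates: &mut Vec<EvictionCandidate>) {
    match order {
        VictimOrder::LsnBased => candidates.sort_by_key(|c| c.lsn),
        VictimOrder::Random => candidates.shuffle(&mut rand::thread_rng()),
    }
}
```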
After further discussion with @jcsp and some review of testing results, I refined the next steps:
Next:
Discussion about the pageserver owning the "is good for next tenant" decision has barely started.
Next step:
PR list has these open:
Next steps:
This week: testing out the imitation-only policy on staging and deciding whether we need to complicate eviction candidate discovery (#6224). With imitation only, we will finally run with a high number of layers all of the time, and disk usage based eviction will run often.
Alternatives to #6224:
Before testing this week:
Extra notes:
New dashboard for the metrics added in #6131: https://neonprod.grafana.net/d/adecaputaszcwd/disk-usage-based-eviction?orgId=1 -- so far there have not been any disk usage based evictions on staging.
Work this week:
- `struct Layer` or disk usage based eviction collection

Last week:
This week:
- split https://github.com/neondatabase/neon/pull/7030, get reviews throughout the week
Note to self: this is about hangs in disk usage based eviction while collecting layers.
Latest troubles in staging have provided good ground for disk usage based eviction runs (pageserver-[01].eu-west-1). Listing the examined outliers after #6131:
2024-03-14T12:04:55.501895Z INFO disk_usage_eviction_task:iteration{iteration_no=1093}: collection took longer than threshold tenant_id=9d984098974b482e25f8b85560f9bba3 shard_id=0000 elapsed_ms=15367
Participated in 9 downloads.
2024-03-14T12:15:44.980494Z INFO disk_usage_eviction_task:iteration{iteration_no=1155}: collection took longer than threshold tenant_id=a992f0c69c3d69b7338586750ba3f9c1 shard_id=0000 elapsed_ms=12523
Participated in 1 download.
2024-03-14T12:18:45.162630Z INFO disk_usage_eviction_task:iteration{iteration_no=1168}: collection took longer than threshold tenant_id=7affec0a9fdf9da5b3638894a84cb9cc shard_id=0000 elapsed_ms=13364
Participated in 1 download.
2024-03-14T12:18:59.848429Z INFO disk_usage_eviction_task:iteration{iteration_no=1168}: collection took longer than threshold tenant_id=a776112dba9d2adbb7a7746b6533125d shard_id=0000 elapsed_ms=10176
Participated in 2 downloads.
2024-03-14T12:19:27.135951Z INFO disk_usage_eviction_task:iteration{iteration_no=1168}: collection took longer than threshold tenant_id=f231e5ac37f956babb1cc98dcfb088ce shard_id=0000 elapsed_ms=17911
Participated in 1 download.
Before and after #7030:
2024-03-21T02:41:07.618648Z INFO disk_usage_eviction_task:iteration{iteration_no=1362}: collection completed elapsed_ms=4969 total_layers=83690
2024-03-21T03:53:43.072165Z INFO disk_usage_eviction_task:iteration{iteration_no=400}: collection completed elapsed_ms=135 total_layers=83695
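For readers unfamiliar with these lines, here is a minimal sketch, not the actual `disk_usage_eviction_task` code, of the kind of timing behind the "collection took longer than threshold" / "collection completed" messages: time the per-tenant layer collection and log when it exceeds a threshold. The threshold value and helper name are assumptions.

```rust
use std::time::{Duration, Instant};

/// Assumed threshold; the real task's value may differ.
const COLLECTION_WARN_THRESHOLD: Duration = Duration::from_secs(10);

/// Run the layer-collection closure for one tenant and log how long it took
/// when it was slower than the threshold, similar to the lines quoted above.
fn collect_layers_timed<T>(tenant_id: &str, collect: impl FnOnce() -> T) -> T {
    let started = Instant::now();
    let layers = collect();
    let elapsed = started.elapsed();
    if elapsed > COLLECTION_WARN_THRESHOLD {
        // The real task logs through `tracing`; eprintln keeps the sketch self-contained.
        eprintln!(
            "collection took longer than threshold tenant_id={tenant_id} elapsed_ms={}",
            elapsed.as_millis()
        );
    }
    layers
}
```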
The set of PRs culminating in #7030 also removed the "10min hang" previously observed. Later, more evidence came in that it was caused by waiting for a download. For other fixed cases, see: https://github.com/neondatabase/neon/issues/6028#issuecomment-1976443012
The `pageserver_layer_downloaded_after` metric is still not being used for alerting because many cases in staging cause redownloads very soon after evicting. In production, the old mtime-based thrashing alert has been downgraded to a warning. It is not known why we get into this situation.
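For context, a minimal sketch of what the `pageserver_layer_downloaded_after` histogram measures, assuming a simplified in-memory histogram rather than the real metrics machinery: record when a layer is evicted and, on redownload, observe how long ago that was. Very fast redownloads land in the smallest buckets.

```rust
use std::time::{Duration, Instant};

/// Simplified stand-in for a histogram: (bucket upper bound, count).
/// Values above the last bucket are dropped in this simplified version.
struct DownloadedAfterHistogram {
    buckets: Vec<(Duration, u64)>,
}

impl DownloadedAfterHistogram {
    fn observe(&mut self, elapsed: Duration) {
        for (upper_bound, count) in self.buckets.iter_mut() {
            if elapsed <= *upper_bound {
                *count += 1;
                return;
            }
        }
    }
}

struct EvictedLayer {
    evicted_at: Instant,
}

/// Called when an evicted layer gets downloaded again: the elapsed time since
/// eviction is what the histogram (and any alert on it) would look at.
fn on_redownload(layer: &EvictedLayer, histogram: &mut DownloadedAfterHistogram) {
    histogram.observe(layer.evicted_at.elapsed());
}
```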
Log analysis is still too time-consuming to spot any patterns. The #7030 preliminaries also included fixes for updating this metric. The best guess so far is that we get unlucky with:
However, in the short time between (1) and (2), the PITR could have advanced just enough to warrant a new synthetic size calculation, for example.
The utilization endpoint work has just not been started.
This change will improve the pageserver's ability to accommodate larger tenants, especially those with append-heavy workloads.