**Open** · at055612 opened 5 days ago
Raised for @p-kimberley
For context, there are two issues here:

1. Once a volume fills up, ref data writes will fail. Therefore it would be useful to be able to cap ref usage on a per-node basis as well as per-feed/DB.
2. An individual ref DB reaching `maxStoreSize` and subsequently failing to load until entries expire and are purged.

Firstly, I suggest `maxStoreSize` be renamed to `maxDbSize`. When ref stores were combined (not split into feeds), this property made sense as a combined limit. Now that there can be multiple DBs, there should be a separate property governing the size of individual DBs.
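To make the rename concrete, the LMDB section of the node config might end up looking something like this. This is only a sketch: the exact YAML nesting and the values shown are assumptions, and `maxStoreSize`/`dbHighWaterMarkPercent` here refer to the new properties proposed in this issue, not existing ones.

```yaml
# Illustrative layout only - names follow this proposal, values are made up.
stroom:
  pipeline:
    referenceData:
      lmdb:
        maxDbSize: "10G"             # proposed rename of the current maxStoreSize (per-DB cap)
        maxStoreSize: "50G"          # proposed new global cap across all ref DBs
        dbHighWaterMarkPercent: 90   # proposed new per-DB purge trigger
```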
I propose two additional settings be created, both of which will cause streams to be purged until the DB(s) are within limits:

- `stroom.pipeline.referenceData.lmdb.maxStoreSize` — maximum size of the ref store, encompassing all ref DBs. This will provide a global cap to prevent disk pressure from affecting node health if ref data grows in an uncontrolled manner. If this limit is reached, Stroom should purge from the largest ref DBs until the aggregate DB size falls below it. Maybe purge iteratively in batches, each time from the currently largest DB.
- `stroom.pipeline.referenceData.lmdb.dbHighWaterMarkPercent` — once an individual ref DB reaches this percentage of `maxDbSize`, purge streams until its size falls below it.
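The store-level purge described above (iterate in batches, each time purging from the currently largest DB, until the aggregate size is under the cap) could be sketched roughly as follows. All class and method names here are hypothetical illustrations, not Stroom's actual API.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the proposed store-level purge loop.
public class StorePurgeSketch {

    // Stand-in for a single ref DB; sizeBytes shrinks as streams are purged.
    static class RefDb {
        final String name;
        long sizeBytes;

        RefDb(final String name, final long sizeBytes) {
            this.name = name;
            this.sizeBytes = sizeBytes;
        }

        // Pretend each purge batch frees a fixed amount; the real purge would
        // remove whole reference streams (oldest first) from this DB.
        void purgeBatch(final long batchBytes) {
            sizeBytes = Math.max(0, sizeBytes - batchBytes);
        }
    }

    // Aggregate size across all ref DBs in the store.
    static long totalSize(final List<RefDb> dbs) {
        return dbs.stream().mapToLong(db -> db.sizeBytes).sum();
    }

    // Purge iteratively in batches, each time from the currently largest DB,
    // until the aggregate size falls below maxStoreSize.
    static void purgeToLimit(final List<RefDb> dbs,
                             final long maxStoreSize,
                             final long batchBytes) {
        while (totalSize(dbs) > maxStoreSize) {
            final RefDb largest = dbs.stream()
                    .max(Comparator.comparingLong(db -> db.sizeBytes))
                    .orElseThrow();
            if (largest.sizeBytes == 0) {
                break; // nothing left to purge
            }
            largest.purgeBatch(batchBytes);
        }
    }

    public static void main(String[] args) {
        final List<RefDb> dbs = List.of(
                new RefDb("FEED_A", 600),
                new RefDb("FEED_B", 300),
                new RefDb("FEED_C", 200));
        purgeToLimit(dbs, 800, 100);
        System.out.println(totalSize(dbs) <= 800); // prints "true"
    }
}
```

Purging from the largest DB first keeps any single feed from monopolising the store, at the cost of repeatedly re-ranking the DBs between batches.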
When a ref store reaches `stroom.pipeline.referenceData.lmdb.maxStoreSize`, loads will fail. It would be good if there were an additional threshold percentage property (e.g. 90%) such that, prior to a load, if the store size (as a percentage of `maxStoreSize`) is greater than this threshold, old streams are purged until the size falls below it.
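The pre-load guard could work along these lines. Again a sketch only, with hypothetical names (`ensureHeadroom`, `Purger`) rather than anything in Stroom today:

```java
// Hypothetical pre-load guard: before a reference load, if the store's size
// as a percentage of maxStoreSize exceeds a threshold (e.g. 90%), purge the
// oldest streams in batches until the store is back below the threshold.
public class PreLoadGuard {

    // Stand-in for the purge mechanism; returns bytes freed by one batch
    // (0 means there is nothing left to purge).
    interface Purger {
        long purgeOldestBatch();
    }

    // Returns the store size after any purging, ready for the load to begin.
    static long ensureHeadroom(long storeSizeBytes,
                               final long maxStoreSizeBytes,
                               final double thresholdPercent,
                               final Purger purger) {
        final long thresholdBytes =
                (long) (maxStoreSizeBytes * thresholdPercent / 100.0);
        while (storeSizeBytes > thresholdBytes) {
            final long freed = purger.purgeOldestBatch();
            if (freed == 0) {
                break; // nothing purgeable; the load may still fail
            }
            storeSizeBytes -= freed;
        }
        return storeSizeBytes;
    }

    public static void main(String[] args) {
        // 950 of 1,000 bytes used against a 90% threshold; each batch frees 50.
        final long size = ensureHeadroom(950, 1000, 90.0, () -> 50L);
        System.out.println(size); // prints 900
    }
}
```

Checking the threshold before the load, rather than reacting to a failed write, means the purge cost is paid up front and the load itself never trips the LMDB size limit mid-way.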