Implement a memory/size-based limit per filter within a bloom object, controlled by a new module config: bf.bloom-memory-limit-per-filter.
This means every write operation (bloom object creation and scale out) that results in a new filter requires the new filter to be within the allowed limit; if not, the write request is rejected with an error.
Similarly, any RDB load operation performs the same memory/size-based limit check. RDB load will fail if any filter within a bloom object is larger than what is allowed.
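A minimal sketch of how such a check might look, assuming the configured limit is available to the module as an atomic value; the names `BLOOM_MEMORY_LIMIT_PER_FILTER`, `validate_filter_size`, and `BloomError` are illustrative, not the module's actual identifiers:

```rust
use std::sync::atomic::{AtomicI64, Ordering};

// Illustrative stand-in for the value behind bf.bloom-memory-limit-per-filter.
pub static BLOOM_MEMORY_LIMIT_PER_FILTER: AtomicI64 = AtomicI64::new(64 * 1024 * 1024);

#[derive(Debug)]
pub enum BloomError {
    FilterExceedsMemoryLimit,
}

/// Rejects a prospective filter of `filter_size_bytes` if it would exceed the
/// configured per-filter limit. Intended to be called before allocating a new
/// filter on object creation, on scale out, and while loading filters from RDB.
pub fn validate_filter_size(filter_size_bytes: i64) -> Result<(), BloomError> {
    let limit = BLOOM_MEMORY_LIMIT_PER_FILTER.load(Ordering::Relaxed);
    if filter_size_bytes > limit {
        return Err(BloomError::FilterExceedsMemoryLimit);
    }
    Ok(())
}
```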
Switch fp_rate handling from f32 to f64 throughout all occurrences for higher precision. The main reason for this is that during scale out, every new filter uses a smaller fp_rate (due to the TIGHTENING_RATIO) to maintain the overall fp_rate. We need to handle increasingly precise fp_rates with every new scale out, and f64 handles this better.
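A small sketch of the precision concern, assuming the tightening behavior multiplies the base fp_rate by a fixed ratio on each scale out; the constant name and the 0.5 value here are assumptions for illustration, not the module's actual constants:

```rust
// Assumed tightening ratio; the real value is a module constant/config.
const TIGHTENING_RATIO: f64 = 0.5;

/// fp_rate applied to the filter created at scale out number `filter_index`
/// (0 = the original filter), keeping the object's overall fp_rate bounded.
fn filter_fp_rate(base_fp_rate: f64, filter_index: u32) -> f64 {
    base_fp_rate * TIGHTENING_RATIO.powi(filter_index as i32)
}

fn main() {
    // After many scale outs the per-filter fp_rate becomes very small;
    // f64 represents these values with far more precision than f32.
    for i in [0u32, 10, 20, 30] {
        println!("filter {}: fp_rate = {:e}", i, filter_fp_rate(0.01, i));
    }
}
```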
The defrag callback was updated to use the new config (bf.bloom-memory-limit-per-filter) to exempt filters over this limit from defrag operations.
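As a hedged sketch of that exemption, with `should_defrag_filter` as a hypothetical helper rather than the module's actual defrag callback signature:

```rust
/// Returns whether a filter should be handed to the defrag allocator.
/// Filters whose size exceeds the configured per-filter limit are exempt
/// (left in place) to avoid moving very large allocations during defrag.
fn should_defrag_filter(filter_size_bytes: u64, memory_limit_per_filter: u64) -> bool {
    filter_size_bytes <= memory_limit_per_filter
}
```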