A new connection request is saved into a `history` bucket, then all buckets are summed up and frang gives a verdict: pass or drop the connection. Since both failed and successful connections are accounted, this may lead to an actual block of the client.
Imagine that a client sends `N` connection requests per bucket, i.e. `FRANG_FREQ * N` connections per evaluated interval, and the total limit is configured as `N < M < FRANG_FREQ * (N - 1)`.
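A minimal Python model of the accounting described above may make the walkthrough below easier to follow. The names and constants (`account_new_conn`, `FRANG_FREQ = 8`, `M = 20`) are made up for illustration; this is only a sketch of the described behaviour, not the actual C code.

```python
FRANG_FREQ = 8   # number of history buckets (assumed value)
M = 20           # connection limit per evaluated interval (assumed value)

# each bucket: [timestamp it was last used, connection counter]
history = [[-1, 0] for _ in range(FRANG_FREQ)]

def account_new_conn(ts):
    """ts is the current time in bucket units; returns True = pass, False = drop."""
    i = ts % FRANG_FREQ
    if history[i][0] != ts:          # stale bucket: zero it and reuse
        history[i] = [ts, 0]
    history[i][1] += 1               # accounted before the verdict, pass or drop alike
    # the verdict: sum over all buckets that still fall into the evaluated interval
    csum = sum(cnt for t, cnt in history if t + FRANG_FREQ > ts)
    return csum <= M
```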
At the start of communication `history[]` is empty.
At time period `1 / FRANG_FREQ` the first bucket is filled with `N` connection requests. Since `N < M`, all of them are allowed.
At time period `(FRANG_FREQ - 1) / FRANG_FREQ` the resulting `sum(history[i] for i in range(FRANG_FREQ - 1))` exceeds `M` and new connections are dropped.
At time period `FRANG_FREQ / FRANG_FREQ` all incoming connections are dropped, since the sum over all previous buckets, not even counting the current one, is already bigger than `M`.
At time period `(FRANG_FREQ + 1) / FRANG_FREQ` the very first bucket is zeroed and reused, but the sum of all the other buckets is already bigger than `M`, so again all new connections are dropped.
The same happens with every new bucket; the simulation sketched after these steps reproduces this behaviour.
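Driving the model above through this scenario (reusing `account_new_conn`, `history`, `FRANG_FREQ` and `M` from the earlier sketch) shows the same picture; `N = 5` is an assumed value that satisfies `N < M < FRANG_FREQ * (N - 1)`:

```python
N = 5                                     # requests per bucket (assumed)
assert N < M < FRANG_FREQ * (N - 1)       # 5 < 20 < 32

for ts in range(3 * FRANG_FREQ):          # three full evaluated intervals
    passed = sum(account_new_conn(ts) for _ in range(N))
    print(f"bucket {ts:2d}: passed {passed}/{N}")

# Output pattern: the first buckets pass everything, then the running sum
# crosses M, and every later bucket -- long after the initial burst --
# passes 0/N, because the dropped requests keep the sum above M forever.
```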
Look closer at what is happening: from some moment on, all new connections are dropped, even though no connections were passed during the previous intervals.
The function actually works like a pressure valve pushing the flow backwards: it closes once the client reaches some peak throughput and keeps blocking the client until the pressure returns to the allowed range. Sounds interesting, but that was not the initial intention.
One way to fix this is to account only successful connections, not dropped ones.
At the same time, 'usual' clients that generate traffic under the threshold wouldn't notice the change.
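One possible shape of such a fix, again only as a sketch against the model above (not an actual patch): compute the verdict first and record the connection only when it passes, so dropped attempts no longer inflate the window.

```python
def account_new_conn_fixed(ts):
    """Same interface as account_new_conn(), but drops are not recorded."""
    i = ts % FRANG_FREQ
    if history[i][0] != ts:          # stale bucket: zero it and reuse
        history[i] = [ts, 0]
    csum = sum(cnt for t, cnt in history if t + FRANG_FREQ > ts)
    if csum + 1 > M:                 # this connection would exceed the limit
        return False                 # drop it without recording it
    history[i][1] += 1               # only passed connections are accounted
    return True
```

With this variant the window drains as old buckets age out, so a client that backs off is admitted again, while clients staying under `M` per interval never hit the drop branch at all.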
Scope
https://github.com/tempesta-tech/tempesta/blob/36f7cc65c5a382df2333d0fc69c74325c1c97e7e/fw/http_limits.c#L295-L318