Open ChrisCooney opened 6 years ago
If you set run_every and buffer_time to the same value, you can reason that each query covers exactly that window, e.g.:
run_every:
minutes: 30
buffer_time:
minutes: 30
It will look like:
Ran rule from 10:30 to 11:00 ( hits...)
Sleeping for 30 minutes
Ran rule from 11:00 to 11:30 ( hits...)
...
Obviously you'll have a much higher latency than using a sliding window. You should check out EventWindow, which handles a lot of the difficulties of maintaining a sliding window.
Just re-read the linked issue. If you are trying to compare the size of two different lists, try this:
My understanding of timeframe is that it's an aggregation of events: if X number of things happen within the timeframe, kick out an alert.
My issue is that I have a custom rule, written in Python, that is working off of the data passed in. Within this custom rule, I've got a tolerance built in. If I don't know the time range of the logs passed into it, the tolerance becomes a more complex affair (proportional to the range of data coming in).
What would be ideal would be if the last 30 minutes of logs were passed into the rule every time, instead of a few minutes each time. I tried to implement scan_entire_timeframe; however, as the docs suggested, this only provided insight on startup, which wasn't desirable. Is there an existing config parameter for this?
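For what it's worth, the "tolerance proportional to the range of data coming in" described above could be sketched like this. It is a hypothetical helper, not part of any library API; the names and the linear-scaling choice are assumptions:

```python
from datetime import datetime, timedelta

def scaled_tolerance(events, base_tolerance, base_window):
    """Scale an alert tolerance in proportion to the time span actually
    covered by the batch of events passed into a custom rule.

    events         -- list of (timestamp, payload) pairs
    base_tolerance -- allowed count when a batch spans `base_window`
    base_window    -- the window the base tolerance was tuned for
    """
    if len(events) < 2:
        return base_tolerance
    span = max(t for t, _ in events) - min(t for t, _ in events)
    # timedelta / timedelta yields a float scaling factor.
    return base_tolerance * (span / base_window)

# Usage: a tolerance of 10 tuned for 30-minute batches shrinks to 5
# when the rule only receives 15 minutes of logs.
t0 = datetime(2020, 1, 1)
batch = [(t0, {}), (t0 + timedelta(minutes=15), {})]
print(scaled_tolerance(batch, 10, timedelta(minutes=30)))
```

Linear scaling is the simplest choice; the point is only that the rule needs to derive the batch's time span itself if the scheduler doesn't guarantee a fixed window.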