Closed jeremy-cxf closed 3 months ago
If you'd like someone to review syntax and logic, someone like @ebonura-fastly would be good for that. However, all we're really doing (outside the auto-generation) is adding a few functions, making sure the float values that configure the timeouts make it to the HTTP client, and adding some validations, which I've tested.
I've unit tested it for various use cases and with around 20k requests' worth of data, which matches up with the requests when checked manually.
Adds the following features:
Adds a toggle to disable catch-up; ideally this should have existed from the beginning to avoid stale states. When it's turned on, from/until times are always calculated from now - delta. It can also be used to rescue anything stuck without having to dive into the KV store.
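A minimal sketch of the "no catch-up" window calculation described above: when the toggle is on, the from/until times ignore any stored checkpoint and are always derived from the current time minus the configured delta. The names (`compute_window`, `delta_seconds`) are illustrative, not the add-on's actual API.

```python
from datetime import datetime, timedelta, timezone


def compute_window(delta_seconds, now=None):
    """Return (from_ts, until_ts) as epoch seconds for a now - delta window."""
    now = now or datetime.now(timezone.utc)
    start = now - timedelta(seconds=delta_seconds)
    # Both ends are recomputed on every run, so a stale checkpoint can never
    # drag the window into the past.
    return int(start.timestamp()), int(now.timestamp())
```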
Adds configurable connect/read timeouts to each input for the HTTP client. I've opted to add this per input because global configuration parameters cannot be validated; for Splunk Cloud users that's problematic given the lack of logs. These are capped at 300 seconds, but should unblock a few customers who were occasionally hitting limits.
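A hedged sketch of what the per-input timeout validation and hand-off to the HTTP client could look like. The helper names, the 300-second cap enforcement, and the assumption that the client is `requests` (which accepts a `(connect, read)` tuple for its `timeout` argument) are illustrative, not the add-on's actual code.

```python
MAX_TIMEOUT = 300.0  # cap mentioned above


def validate_timeout(value, name):
    """Reject timeouts that are non-positive or above the 300 s limit."""
    value = float(value)
    if not 0 < value <= MAX_TIMEOUT:
        raise ValueError("%s must be between 0 and %s seconds" % (name, MAX_TIMEOUT))
    return value


def fetch(url, connect_timeout, read_timeout):
    ct = validate_timeout(connect_timeout, "connect_timeout")
    rt = validate_timeout(read_timeout, "read_timeout")
    import requests  # assumed HTTP client; takes a (connect, read) tuple
    return requests.get(url, timeout=(ct, rt))
```

Validating at input-setup time (rather than relying on a global setting) means a bad value fails loudly in the UI instead of silently misbehaving where Splunk Cloud users can't see logs.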
Adds some extra configuration around catch-up, if it is enabled, for checkpoints older than 24 hours: either reset to now - delta (the default) or to exactly 24 hours ago.
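The reset behavior above can be sketched roughly as follows; function and parameter names are hypothetical, chosen only to show the two reset modes.

```python
import time

DAY = 24 * 60 * 60


def resolve_start(checkpoint_ts, delta_seconds, reset_to_delta=True, now=None):
    """Pick the feed start time, applying the 24-hour staleness rule."""
    now = now if now is not None else time.time()
    if now - checkpoint_ts <= DAY:
        return checkpoint_ts        # checkpoint is fresh enough; resume from it
    if reset_to_delta:
        return now - delta_seconds  # default: skip ahead to now - delta
    return now - DAY                # alternative: start exactly 24 hours ago
```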
Adds the ability to query only attack/anomaly signals for people who collect full request feeds. This was added as a request for a separate build provided to a customer, so I've left it in for when they upgrade. It is not recommended, but it can reduce traffic; it becomes problematic if any new signals are added.
Handles the POST parameter changes to the feed endpoint: pagination for the request feed endpoint is now done via POST parameters rather than query parameters.
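A sketch of paging the request feed via the POST body instead of the query string. The endpoint path, the `next` cursor field, and the response shape are assumptions for illustration; `session` is any requests-like object with a `post` method.

```python
def fetch_all_requests(session, base_url, from_ts, until_ts):
    """Drain the request feed, following the pagination cursor in the body."""
    body = {"from": from_ts, "until": until_ts}
    results = []
    while True:
        resp = session.post(base_url + "/feed/requests", json=body)
        resp.raise_for_status()
        payload = resp.json()
        results.extend(payload.get("data", []))
        next_marker = payload.get("next")
        if not next_marker:
            break
        body["next"] = next_marker  # cursor goes in the POST body, not the URL
    return results
```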
I would prefer to gut any ability to catch up altogether, but at least it's an option, and it's set as the default behavior now.