spong opened this issue 3 years ago
We've run into similar issues with the webhook action when sending to services that enforce a maximum payload size on incoming webhooks. One solution would be a 'foreach' function, similar to Watcher's foreach action, so we could send a separate webhook or Slack message for each event that triggered the alert. In Watcher we work around this limitation by using foreach to send each event as its own API request, so events don't get truncated.
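For reference, a minimal sketch of the Watcher workaround being described. The index name, schedule, and webhook URL are placeholders; the key parts are the `foreach` and `max_iterations` fields on the action, which make Watcher fire the webhook once per element of `ctx.payload.hits.hits`, with each element exposed to the action as `ctx.payload._value`:

```json
{
  "trigger": { "schedule": { "interval": "5m" } },
  "input": {
    "search": {
      "request": {
        "indices": ["my-signals-index"],
        "body": { "query": { "match_all": {} } }
      }
    }
  },
  "actions": {
    "notify_slack_per_event": {
      "foreach": "ctx.payload.hits.hits",
      "max_iterations": 100,
      "webhook": {
        "method": "POST",
        "url": "https://hooks.slack.com/services/PLACEHOLDER",
        "body": "{{#toJson}}ctx.payload._value{{/toJson}}"
      }
    }
  }
}
```

Because each event becomes its own request, no single message can grow past the receiving service's limit.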
For what it's worth - this is true in the Alerting framework as well.
If I'm not mistaken, the reason you're encountering this is that in Security Solution specifically, signals get batched into a single alert at the framework level (`scheduleAction` is only called once, instead of once per alert).
This came up in last week's Q&A session around Alerts-as-data, and @mikecote, @jasonrhodes and @tsg are supposed to pick this up for further discussion. Since Observability schedules actions per alert, I'm assuming Security will be able to use either approach through the rules registry, but I'm not sure how that's going to work.
When using the {{#context.alerts}} loop to display event information with URLs, we will sometimes hit the Slack message API character limit. When this happens, Slack truncates the message, which often breaks the formatting at the end of the message, and an unknown number of events won't be displayed in Slack. For teams using a SOAR solution that relies on Slack to automate triage, this will break some workflows. In Watcher we work around this limitation by using the foreach function to send each event as its own API request, so events don't get truncated.
e.g. the Slack API docs on truncating really long messages
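To make the failure mode concrete, here is a hypothetical sketch of the per-event alternative to a single {{#context.alerts}} message. The `Alert` shape, `renderMessage` format, and `maxChars` parameter are all illustrative assumptions, not Kibana APIs; the point is that rendering one payload per alert, with a defensive truncation, means Slack never cuts a combined message mid-formatting:

```typescript
// Illustrative only: a per-alert message builder instead of one big
// {{#context.alerts}} template. Field names below are assumptions.
interface Alert {
  id: string;
  ruleName: string;
  url: string;
}

// Render a single alert as its own Slack-bound text payload.
function renderMessage(alert: Alert): string {
  return `Rule "${alert.ruleName}" matched event ${alert.id}: ${alert.url}`;
}

// One payload per alert; truncate explicitly rather than letting the
// receiving service cut the message at an arbitrary point.
function buildPayloads(alerts: Alert[], maxChars: number): string[] {
  return alerts.map((alert) => {
    const text = renderMessage(alert);
    return text.length <= maxChars ? text : text.slice(0, maxChars - 1) + "…";
  });
}
```

Each payload would then be sent as a separate webhook request, mirroring what the Watcher foreach workaround does.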
Tangential issue regarding support for Slack Block Kit: https://github.com/elastic/kibana/issues/88832
Originally reported by @aarju