
[Security Solution][Detections] Really long messages can break Slack action formatting #92341

Open spong opened 3 years ago

spong commented 3 years ago

When using a {{#context.alerts}} loop to display event information with URLs, we will sometimes hit the Slack message API character limit. When this happens, Slack truncates the message, which often breaks the formatting of the last event, and an unknown number of events won't be displayed in Slack. For teams using a SOAR solution that relies on Slack to automate triage, this will break some workflows. In Watcher we get around this limitation by using the foreach function to send each event as its own API request, so we don't get truncated events.

e.g.

```
{{#context.alerts}}
Detection alert for user: {{user.name}}
Detection alert for host: {{host.name}}
{{/context.alerts}}
```

See the Slack API documentation on how it truncates really long messages.
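
For context, the Watcher workaround mentioned above sets the foreach field on an action so that the action runs once per array element, with ctx.payload bound to the current element on each iteration. A minimal sketch of such a watch action (the action name, channel, and field paths are illustrative):

```json
{
  "actions": {
    "notify_slack": {
      "foreach": "ctx.payload.hits.hits",
      "max_iterations": 100,
      "slack": {
        "message": {
          "to": ["#detections"],
          "text": "Detection alert for user: {{ctx.payload._source.user.name}} on host: {{ctx.payload._source.host.name}}"
        }
      }
    }
  }
}
```

Each hit is sent as its own Slack API request, so no single message approaches the character limit.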

Tangential issue regarding support for Slack Block Kit: https://github.com/elastic/kibana/issues/88832

Originally reported by @aarju

elasticmachine commented 3 years ago

Pinging @elastic/security-solution (Team: SecuritySolution)

elasticmachine commented 3 years ago

Pinging @elastic/security-detections-response (Team:Detections and Resp)

aarju commented 3 years ago

We've run into similar issues with the webhook action when sending to services that impose a maximum size on incoming webhooks. One solution would be a 'foreach' function, similar to Watcher's foreach action, so we could send a separate webhook or Slack message for each event that triggered the alert (see the sketch below).
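
As a rough illustration of that proposal, the sketch below sends one webhook request per alert rather than a single batched message. This is hypothetical glue code, not Kibana's actual connector API; the AlertEvent shape and payload format are assumptions:

```ts
// Hypothetical sketch of a "foreach"-style webhook action: one HTTP request
// per alert, so no single payload can exceed a receiver's size limit.
// This is NOT Kibana's connector API; all names here are illustrative.

interface AlertEvent {
  user?: { name?: string };
  host?: { name?: string };
}

async function sendPerEventWebhooks(webhookUrl: string, alerts: AlertEvent[]): Promise<void> {
  for (const alert of alerts) {
    const response = await fetch(webhookUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        text:
          `Detection alert for user: ${alert.user?.name ?? 'unknown'}\n` +
          `Detection alert for host: ${alert.host?.name ?? 'unknown'}`,
      }),
    });
    if (!response.ok) {
      throw new Error(`Webhook request failed with status ${response.status}`);
    }
  }
}
```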

gmmorris commented 3 years ago

> In Watcher we get around this limitation by using the foreach function to send each event as its own API request, so we don't get truncated events.

For what it's worth, this is true in the Alerting framework as well.

If I'm not mistaken, the reason you're encountering this is that in Security Solution specifically, signals get batched into a single alert at the framework level (scheduleActions is only called once, instead of once per alert).

This came up in last week's Q&A session around Alerts-as-data, and @mikecote, @jasonrhodes, and @tsg are supposed to pick this up to discuss further. Since Observability schedules actions per alert, I'm assuming Security will be able to use either approach through the rules registry, but I'm not sure how that's going to work.
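
To make that distinction concrete, here is a rough sketch of the two scheduling patterns, using approximations of the legacy alerting executor API of that era (the type stubs, instance IDs, and context shape are illustrative, not the framework's exact contract):

```ts
// Minimal stand-ins for the alerting framework types; approximations only.
interface AlertInstance {
  scheduleActions(actionGroup: string, context: Record<string, unknown>): void;
}
interface RuleServices {
  alertInstanceFactory(id: string): AlertInstance;
}
interface Signal {
  id: string;
  user?: { name?: string };
  host?: { name?: string };
}

// Security Solution behavior described above: signals are batched into a
// single alert instance, so scheduleActions runs once and the connector
// receives one (potentially huge) context.alerts array.
function scheduleBatched(services: RuleServices, signals: Signal[]): void {
  services.alertInstanceFactory('siem-signals').scheduleActions('default', {
    alerts: signals,
  });
}

// Observability-style behavior: one alert instance per signal, so each
// action execution (Slack message, webhook, ...) carries a single alert.
function schedulePerAlert(services: RuleServices, signals: Signal[]): void {
  for (const signal of signals) {
    services.alertInstanceFactory(signal.id).scheduleActions('default', {
      alerts: [signal],
    });
  }
}
```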