ccdta opened this issue 1 year ago
Pinging @elastic/security-solution (Team: SecuritySolution)
Pinging @elastic/security-detections-response (Team:Detections and Resp)
@ccdta We are currently experiencing the same limitation with multiple clients and need to filter the alerts. Could you expand on the workaround you are currently using — where you are putting the fields, etc.? It would be handy as a workaround for us too.
@MilkyEsquire Workaround:
Note: whenever there is a cluster upgrade, the component template will revert and the workaround has to be applied again.
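For reference, a sketch of the runtime-field workaround described in this thread (the template name matches a default deployment; the exact field list and the merge into your existing template body are up to you — this shows only the relevant fragment, not a complete template):

```console
# Add the missing data_stream fields as runtime fields to the alerts
# mapping component template. NOTE: PUT replaces the whole component
# template, so GET the current body first and merge this fragment in.
# Stack upgrades revert this template, so it must be re-applied.
PUT _component_template/.alerts-security.alerts-mappings
{
  "template": {
    "mappings": {
      "runtime": {
        "data_stream.namespace": { "type": "keyword" },
        "data_stream.dataset":   { "type": "keyword" },
        "data_stream.type":      { "type": "keyword" }
      }
    }
  }
}
```

The original report then reindexed the alert indices so existing data picked up the change; rolling over the alias so new backing indices use the updated template may also be an option.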
@ccdta Great thanks for that, worked nicely. We submitted a ticket to Elastic Support around this missing feature a couple of months back also, so hopefully they will add this functionality in!
Elastic team, any updates on this?
@elastic/response-ops is this something you guys could look at?
@yctercero We originally included the `data_stream.*` mappings in the ECS component template that the alerts indices reference, but ran into issues because the detection rules use the mappings to determine which fields to copy from the source indices to the alert indices. Because these fields are mapped as `constant_keyword` and the alert indices hold data from multiple different sources, copying from multiple sources would break the `constant_keyword` mapping. We then omitted any fields mapped as `constant_keyword` (which ends up being just the `data_stream.*` fields). If we want to include them in the ECS component template, the detection rule build-alert logic needs to be updated to not copy these fields from the source index to the alerts index; otherwise there will be many indexing errors.
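To illustrate why copying into a shared alerts index breaks `constant_keyword` (a sketch against a throwaway index — the index name and values here are made up): the first indexed document pins the constant's value, and any document with a different value is then rejected.

```console
PUT demo-constant
{
  "mappings": {
    "properties": {
      "data_stream.namespace": { "type": "constant_keyword" }
    }
  }
}

# First document fixes the constant to "prod"
POST demo-constant/_doc
{ "data_stream.namespace": "prod" }

# A document copied from a source in a different namespace is rejected
# with a mapper error, because constant_keyword only accepts the single
# value fixed in the mappings.
POST demo-constant/_doc
{ "data_stream.namespace": "staging" }
```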
cc @kqualters-elastic
It was also discussed that we could include these fields but change the mapping to `keyword` instead of `constant_keyword`; however, diverging from the ECS mapping like this would show up in the data quality dashboard.
cc @kqualters-elastic
Thanks for the background @ymao1. Apologies if I missed these convos earlier. I'll bring it up at our Advanced Correlation meeting to see what we want to do here.
cc @paulewing @peluja1012 @marshallmain
@yctercero I was curious if there was any update on this issue?
Any updates?
As an MSSP, we use Elastic to monitor our clients' environments. We have one use case where having data_stream.namespace mapped is quite important to us.
We measure and report on key metrics for each client, like number of alerts, number of closed alerts, and number of alerts in progress. Unless I hack the index template and replace it after every upgrade, I can't separate this data by data_stream.namespace.
With our setup there's no reason to duplicate the SIEM rules across other spaces, which I assume would be the other option. But adding the mapping to the index template works, and having an Alerts@Custom index component template would work too. It seems like an easy-to-implement solution.
Another use for this could be filtering the alerts console for a specific client. This would be especially useful during an active cybersecurity event, where other clients who are physically separated from the client under attack are a lesser priority, and their alerts would only serve to confuse the incident response team.
It could also help with workload balancing among SOC analysts, since they are sometimes assigned to a single client rather than the whole repository.
I'm not familiar enough with the deployment code and process from GitHub out to the production environments. But, at a bare minimum, and maybe as an additional request, deploying an Alerts@Custom index component template would allow adding mappings for this and other fields. And while you have the code cracked open, an Alerts@Custom ingest pipeline would let us add an enrichment processor to our alerts index. I use one to copy the host's maximum vulnerability priority rating into the alert index; it adds an additional level of risk knowledge for the analyst.
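An enrich setup like the one described above could look roughly like this (a sketch only — names such as `vuln-summary`, `host-vuln-policy`, and `vulnerability.max_priority` are placeholders, not the commenter's actual configuration):

```console
# 1. Enrich policy matching documents to a per-host vulnerability
#    summary index on host.name
PUT _enrich/policy/host-vuln-policy
{
  "match": {
    "indices": "vuln-summary",
    "match_field": "host.name",
    "enrich_fields": ["vulnerability.max_priority"]
  }
}
POST _enrich/policy/host-vuln-policy/_execute

# 2. Ingest pipeline that copies the max vulnerability priority onto
#    each document passing through it
PUT _ingest/pipeline/alerts-enrich-vuln
{
  "processors": [
    {
      "enrich": {
        "policy_name": "host-vuln-policy",
        "field": "host.name",
        "target_field": "host_vuln",
        "ignore_missing": true
      }
    }
  ]
}
```

The remaining gap, as the comment notes, is having a supported hook (an Alerts@Custom pipeline) to attach such a pipeline to the alerts indices without it being reverted.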
So we found another workaround for this that prevents you from having to modify a component template and re-index. This also means you don't have to apply the fix every time you upgrade the cluster. You can filter on 'kibana.alert.ancestors.index : *-NAMESPACE-*' (replace NAMESPACE with your namespace obviously). We have verified that this works.
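For anyone querying the alerts index directly rather than through the KQL bar, the same filter in query DSL would look something like this (assuming a `prod` namespace embedded in the source index names):

```console
GET .alerts-security.alerts-default/_search
{
  "query": {
    "wildcard": {
      "kibana.alert.ancestors.index": { "value": "*-prod-*" }
    }
  }
}
```

This works because `kibana.alert.ancestors.index` records the source index the alert was generated from, which by convention embeds the namespace.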
Describe the bug: The data_stream.namespace field cannot be filtered on .internal.alerts-security.alerts-default indices.
Kibana/Elasticsearch Stack version: 8.7.0
Server OS version:
Browser and Browser OS versions:
Elastic Endpoint version:
Original install method (e.g. download page, yum, from source, etc.):
Functional Area (e.g. Endpoint management, timelines, resolver, etc.):
Steps to reproduce:
Current behavior: The data_stream.namespace, data_stream.dataset, and data_stream.type fields in the .internal.alerts-security.alerts-default indices are not searchable, even though they do appear in each security alert document.
We added these fields as runtime fields in the native .alerts-security.alerts-mappings component template and reindexed, which resolved the issue. However, after every cluster upgrade the template resets and we have to repeat this process:

```json
{
  "runtime": {
    "data_stream.namespace": { "type": "keyword" }
  }
}
```
Expected behavior: These fields should be searchable in the alerts index.
Screenshots (if relevant):
Errors in browser console (if relevant):
Provide logs and/or server output (if relevant):
Any additional context (logs, chat logs, magical formulas, etc.):