It seems that only some of the alerts matching my query are sent to Slack, while many are missed.
My ElastAlert rule:
name: service_logs_err
# Type of alert.
# The any rule will match everything. Every hit that the query returns will generate an alert
type: any
# Index to search, wildcard supported
index: fluentd-*
filter:
- query:
    query_string:
      query: "level:>30 AND _exists_:err.message"
# The alert is used when a match is found
alert:
- "slack"
alert_text: |
  Issue on {0} service occurred at {1} with level {2}
  {3}
  -------------
alert_text_args:
- "hostname"
- "@timestamp"
- "level"
- "err.message"
slack_emoji_override: ":rotating_light:"
slack_webhook_url:
- "_replace_slack_webhook_"
My ElastAlert config:
# How often ElastAlert will query Elasticsearch
# The unit can be anything from weeks to seconds
run_every:
  minutes: 1
# ElastAlert will buffer results from the most recent
# period of time, in case some log sources are not in real time
buffer_time:
  minutes: 15
# The Elasticsearch hostname for metadata writeback
# Note that every rule can have its own Elasticsearch host
es_host: _replace_elasticsearch_host_
# The Elasticsearch port
es_port: _replace_elasticsearch_port_
# The index on es_host which is used for metadata storage
# This can be an unmapped index, but it is recommended that you run
# elastalert-create-index to set a mapping
writeback_index: _replace_elastalert_index_
# If an alert fails for some reason, ElastAlert will retry
# sending the alert until this time period has elapsed
alert_time_limit:
  days: 2
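One more setting I have not touched is max_query_size. My understanding from the docs is that ElastAlert downloads at most 10,000 documents per query by default and only logs a warning when there are more, so overflow matches would never become alerts. A sketch of what I might add to the config (the value 20000 is arbitrary, and I believe Elasticsearch's own index.max_result_window default of 10,000 would also need raising):
# Sketch only, not in my config. Default is 10000; hits beyond it
# are skipped with a warning rather than alerted on.
max_query_size: 20000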
Please help.