sathishdsgithub opened this issue 7 years ago
If I use `query_string`, I see the error message below:
```
root@ubuntu:/tmp/elastalert# python -m elastalert.elastalert --verbose --rule sample_rule.yaml
INFO:elastalert:Starting up
INFO:elastalert:Queried rule Sample Rule from 2017-06-18 08:37 PDT to 2017-06-18 08:43 PDT: 10000 / 10000 hits (scrolling..)
ERROR:root:Traceback (most recent call last):
  File "/tmp/elastalert/elastalert/elastalert.py", line 1010, in run_all_rules
    num_matches = self.run_rule(rule, endtime, self.starttime)
  File "/tmp/elastalert/elastalert/elastalert.py", line 767, in run_rule
    if not self.run_query(rule, rule['starttime'], endtime):
  File "/tmp/elastalert/elastalert/elastalert.py", line 558, in run_query
    data = self.get_hits(rule, start, end, index, scroll)
  File "/tmp/elastalert/elastalert/elastalert.py", line 372, in get_hits
    hits = self.process_hits(rule, hits)
  File "/tmp/elastalert/elastalert/elastalert.py", line 289, in process_hits
    set_es_key(hit['_source'], rule['timestamp_field'], rule['ts_to_dt'])
  File "elastalert/config.py", line 211, in _ts_to_dt_with_format
    return ts_to_dt_with_format(ts, ts_format=rule['timestamp_format'])
  File "elastalert/util.py", line 135, in ts_to_dt_with_format
    dt = datetime.datetime.strptime(timestamp, ts_format)
  File "/usr/lib/python2.7/_strptime.py", line 332, in _strptime
    (data_string, format))
ValueError: time data '2017-06-18 15:37:42.805' does not match format '%Y-%m-%d %H:%M:%S.000'
ERROR:root:Uncaught exception running rule Sample Rule: time data '2017-06-18 15:37:42.805' does not match format '%Y-%m-%d %H:%M:%S.000'
INFO:elastalert:Rule Sample Rule disabled
INFO:elastalert:Sleeping for 58.059853 seconds
```
I have no idea how to fix the error below. Please assist:
```
root@ubuntu:/tmp/elastalert# python -m elastalert.elastalert --verbose --rule sample_rule.yaml
INFO:elastalert:Starting up
WARNING:elasticsearch:GET http://192.168.96.141:9200/graylog_0/_search?_source_include=timestamp%2C%2A&ignore_unavailable=true&scroll=30s&size=10000 [status:400 request:0.007s]
ERROR:root:Error running query: TransportError(400, u'search_phase_execution_exception', u'failed to parse date field [2017-06-18 15:50:41.829268] with format [yyyy-MM-dd HH:mm:ss.SSS]')
INFO:elastalert:Ran Sample Rule from 2017-06-18 08:50 PDT to 2017-06-18 09:33 PDT: 0 query hits (0 already seen), 0 matches, 0 alerts sent
INFO:elastalert:Sleeping for 59.967873 seconds
```
My rule:
```yaml
name: Sample Rule
es_host: 192.168.96.141
es_port: 9200
timestamp_field: timestamp
timestamp_type: custom
timestamp_format: "%Y-%m-%d %H:%M:%S.000"
use_strftime_index: true
writeback_index: elastalert_status
index: graylog_0
type: frequency
num_events: 10
buffer_time:
  minutes: 60
run_every:
  seconds: 15
timeframe:
  days: 15
alert:
```
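The ValueError in the traceback comes from Python's `strptime`: in `'%Y-%m-%d %H:%M:%S.000'` the `.000` is matched as three literal zero characters, so a real fractional part like `.805` can never match. `%f` is the directive for fractional seconds. A minimal sketch, using the timestamp from the error above:

```python
import datetime

ts = '2017-06-18 15:37:42.805'

# '.000' is treated literally, so this format raises ValueError
try:
    datetime.datetime.strptime(ts, '%Y-%m-%d %H:%M:%S.000')
except ValueError:
    pass

# '%f' matches 1-6 fractional-second digits
dt = datetime.datetime.strptime(ts, '%Y-%m-%d %H:%M:%S.%f')
print(dt.microsecond)  # 805000
```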
@sathishdsgithub it's similar to my issue here: https://github.com/Yelp/elastalert/issues/1165 I had to create a pipeline in Graylog to transform the default timestamps coming from the collector into a new field, and cut the sub-millisecond digits there...
An example of the pipeline rule I applied on the stream "All Messages":
```
rule "timestamp_duplicate"
when
    has_field("timestamp")
then
    let new_date = parse_date(to_string($message.timestamp), "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
    set_field("short_timestamp", format_date(new_date, "yyyy-MM-dd HH:mm:ss"));
end
```
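In Python terms, the pipeline rule above roughly does the following (a sketch of the transformation, not Graylog's actual implementation; the sample timestamp is hypothetical but matches Graylog's default format):

```python
import datetime

# hypothetical message timestamp in Graylog's default ISO format
timestamp = '2017-06-18T15:12:38.667Z'

# parse_date(...) equivalent
new_date = datetime.datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S.%fZ')

# format_date(...) equivalent: drop the fractional seconds entirely
short_timestamp = new_date.strftime('%Y-%m-%d %H:%M:%S')
print(short_timestamp)  # 2017-06-18 15:12:38
```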
And I have another field, from our apps themselves, which is in a different format. So several rule types "like" one format, while the "percentage_match" rule only works with the other format (after the pipeline) :D I still haven't figured out why.
hope it helps :)
@Dmitry1987 thanks a lot, let me try this and get back to you. I've been breaking my head over this for the last four days.
@Dmitry, one quick question: which timestamp will the ElastAlert rule consider, the short_timestamp or the regular "timestamp"?
Correct me if I'm wrong: should I explicitly tell the rule to use `timestamp_field: short_timestamp`?
Exactly, this is how I point to a different field in every rule now...
You're not alone, we've been breaking our heads making it work for more than 4 days lol. Some stuff already works; some stuff had to be hacked in code (added extra VictorOps notification fields from the matched array, generated custom Kibana links that get appended to mails...).
For now a "spike" type rule works well for me. But "percentage_match" never finds results for its "bucket" (the aggregation to compare with); I've tried almost all combinations of rule params and am trying to get the raw queries it produces (visible in the debug log) to work. But I suspect it's more a problem with our schema and fields than with ElastAlert.
I think it's worth submitting a PR regarding the date issues... I'll try to find a way for it to be 'smarter' about dates and hope it'll be merged.
https://github.com/Yelp/elastalert/pull/1022 was just merged; it will let you fix this timestamp issue.
@Qmando
I have managed to fix the timestamp error, and my rule now looks like the one below. My objective is to display output on the console whenever my rule matches the query string. I'm using a command alert: `/bin/echo Found something`.
############################ ElastAlert rule ##################################

```yaml
name: Sample Rule
es_host: 192.168.96.141
es_port: 9200
index: graylog_0
#use_strftime_index: true
timestamp_field: timestamp
timestamp_type: custom
timestamp_format: '%Y-%m-%d %H:%M:%S.%f'
timestamp_format_expr: 'ts[:23] + ts[26:]'
#writeback_index: elastalert_status
type: frequency
filter:
- query:
#   query_string:
#     query: "_type: message"
    query_string:
      query: "source: 192.168.96.141"
num_events: 10
timeframe:
  hours: 1
run_every:
  seconds: 15
alert:
- command
#- "debug"
#command: ["/bin/echo Found something !!"]
```
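As I understand it, `timestamp_format_expr` is evaluated by ElastAlert with `ts` bound to the timestamp already formatted via `timestamp_format`; since `%f` always renders six digits, the slice `ts[:23] + ts[26:]` drops the last three so the value matches Elasticsearch's `yyyy-MM-dd HH:mm:ss.SSS` mapping. A quick check of what the slice does:

```python
# '%f' renders six fractional digits; the slice keeps only milliseconds
ts = '2017-06-19 22:36:06.828926'
trimmed = ts[:23] + ts[26:]
print(trimmed)  # 2017-06-19 22:36:06.828
```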
#################### Elasticsearch log sample ################################

```json
{
  "_index": "graylog_0",
  "_type": "message",
  "_id": "5c59f5a7-557a-11e7-b022-000c298681bd",
  "_version": 1,
  "_score": 1,
  "_source": {
    "MONTHNUM": "06",
    "gl2_remote_ip": "192.168.96.141",
    "gl2_remote_port": 37854,
    "IPV4": "192.168.96.141",
    "streams": ["000000000000000000000001"],
    "source": "192.168.96.141",
    "message": "2017-06-19 22:36:06.828926 IP 192.168.96.1.54050 > 192.168.96.141.42285: Flags [S], seq 2248467626, win 1024, options [mss 1460], length 0",
    "gl2_source_input": "59317ae66e332b0c4df4f831",
    "dst_ip": "192.168.96.141",
    "src_ip": "192.168.96.1",
    "src_port": "54050",
    "YEAR": "2017",
    "payload": ": Flags [S], seq 2248467626, win 1024, options [mss 1460], length 0",
    "src_dst_ip": "192.168.96.1-192.168.96.141",
    "dst_port": "42285",
    "gl2_source_node": "6e3e4964-4439-4cbb-8e49-0baf1a006532",
    "SECOND": "06.828926",
    "MONTHDAY": "19",
    "timestamp": "2017-06-20 05:36:09.522"
  }
}
```
################### Running Elastalert rule with python ####################
I see only the "Queried rule Sample Rule" output and not my alert command's output. I want to display the matched string in the output. Am I missing something here?
```
root@ubuntu:/tmp/elastalert# python -m elastalert.elastalert --verbose --rule sample_rule.yaml
INFO:elastalert:Starting up
INFO:elastalert:Queried rule Sample Rule from 2017-06-19 23:08 PDT to 2017-06-19 23:23 PDT: 10000 / 10000 hits (scrolling..)
INFO:elastalert:Queried rule Sample Rule from 2017-06-19 23:08 PDT to 2017-06-19 23:23 PDT: 20000 / 10000 hits (scrolling..)
INFO:elastalert:Queried rule Sample Rule from 2017-06-19 23:08 PDT to 2017-06-19 23:23 PDT: 30000 / 10000 hits (scrolling..)
INFO:elastalert:Queried rule Sample Rule from 2017-06-19 23:08 PDT to 2017-06-19 23:23 PDT: 40000 / 10000 hits (scrolling..)
INFO:elastalert:Queried rule Sample Rule from 2017-06-19 23:08 PDT to 2017-06-19 23:23 PDT: 50000 / 10000 hits (scrolling..)
```
@sathishdsgithub you need to wait until it finishes scrolling; then it shows you all the "alert alert alert ..." messages at once. Try using the --start param to limit the test to a shorter time frame, or set "max_query_size: 100000" or something :) (because your timeframe is pretty short already)
Oh, it's without a "query_key", so you should see just one notification about the alert. I set a query_key in most of my rules, so it later shows which of the combinations it will send alerts for.
@sathishdsgithub Instead of `/bin/echo`, use

```yaml
alert:
- debug
```

This will print the alerts.
Also, you've got a ton of matching documents, so you might want to add

```yaml
use_count_query: true
doc_type: message
```

This will make it run much, much faster.
Also, you'll need to put `run_every` into `config.yaml` rather than the rule yaml.
@Qmando @Dmitry1987
1.) I get the following error after the email alert is sent. Please assist:
```
ERROR:root:Failed to delete alert AVzOxHoKv_ErwPo1Pzfz at 2017-06-22T07:48:41.798477Z
INFO:elastalert:Queried rule Port Scan Detection from 2017-06-22 13:13 IST to 2017-06-22 13:18 IST: 150 / 150 hits
INFO:elastalert:Ran Port Scan Detection from 2017-06-22 13:13 IST to 2017-06-22 13:18 IST: 150 query hits (150 already seen), 0 matches, 1 alerts sent
INFO:elastalert:Sleeping for 1.890259 seconds
```
**2.) Moreover, I get the email alert twice for the same alert. I set realert to 0, but I still get the email twice every time the alert condition matches.**

Below is the working ElastAlert rule:
```yaml
name: Port Scan Detection
es_host: 192.168.96.141
es_port: 9200
index: graylog_0
timestamp_field: timestamp
timestamp_type: custom
timestamp_format: '%Y-%m-%d %H:%M:%S.%f'
timestamp_format_expr: 'ts[:23] + ts[26:]'
doc_type: message
writeback_index: elastalert_status
type: cardinality
query_key: [src_ip, dst_ip]
cardinality_field: dst_port
max_cardinality: 50
timeframe:
  seconds: 30
old_query_limit:
  seconds: 1
filter:
- query:
    query_string:
      query: "_type: message AND NOT sackOK"
aggregation:
  minutes: 2
alert:
- "email"
smtp_host: smtp.gmail.com
smtp_port: 465
smtp_ssl: true
from_addr: sathyakumar99977@gmail.com
smtp_auth_file: /usr/share/elasticsearch/smtp_auth_file.yml
email:
- "sathyakumar99977@gmail.com"
alert_text: "This is a test email, please ignore! ElastAlert has detected suspicious activity: a network port scan. Do something about it!"
attach_related: true
include: [message, ip_address, dst_port]
```
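For reference, a `cardinality` rule of this shape fires when the number of distinct `cardinality_field` values seen for a `query_key` combination within the timeframe exceeds `max_cardinality`. A rough sketch of that logic (illustrative only, not ElastAlert's code; the event list is made up):

```python
from collections import defaultdict

MAX_CARDINALITY = 50  # from max_cardinality in the rule

# hypothetical scan traffic: one source hitting 60 distinct ports on one host
events = [
    {'src_ip': '192.168.96.1', 'dst_ip': '192.168.96.141', 'dst_port': str(p)}
    for p in range(60)
]

# query_key (src_ip, dst_ip) -> distinct cardinality_field (dst_port) values
ports_by_pair = defaultdict(set)
for e in events:
    ports_by_pair[(e['src_ip'], e['dst_ip'])].add(e['dst_port'])

fired = any(len(ports) > MAX_CARDINALITY for ports in ports_by_pair.values())
print(fired)  # True
```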
Hi,

I'm trying to generate alerts using ElastAlert against Graylog's Elasticsearch. Details below.
######################## Elasticsearch ###########################################

My Elasticsearch log:

```json
{
  "_index": "graylog_0",
  "_type": "message",
  "_id": "85ea8350-5438-11e7-b022-000c298681bd",
  "_version": 1,
  "_score": 1,
  "_source": {
    "MONTHNUM": "06",
    "gl2_remote_ip": "192.168.96.141",
    "gl2_remote_port": 37366,
    "IPV4": "192.168.96.141",
    "streams": ["000000000000000000000001"],
    "source": "192.168.96.141",
    "message": "2017-06-18 08:12:05.703640 IP 192.168.96.1.38755 > 192.168.96.141.48590: Flags [S], seq 4083855875, win 1024, options [mss 1460], length 0",
    "gl2_source_input": "59317ae66e332b0c4df4f831",
    "dst_ip": "192.168.96.141",
    "src_ip": "192.168.96.1",
    "src_port": "38755",
    "YEAR": "2017",
    "payload": ": Flags [S], seq 4083855875, win 1024, options [mss 1460], length 0",
    "src_dst_ip": "192.168.96.1-192.168.96.141",
    "dst_port": "48590",
    "gl2_source_node": "6e3e4964-4439-4cbb-8e49-0baf1a006532",
    "SECOND": "05.703640",
    "MONTHDAY": "18",
    "timestamp": "2017-06-18 15:12:38.667"
  }
}
```

####################### GRAYLOG ##################################

```
2017-06-18 08:02:10.495585 IP 192.168.96.1.45229 > 192.168.96.141.35877: Flags [S], seq 1519460217, win 1024, options [mss 1460], length 0
```
##################### ElastAlert rule ##############################

```yaml
name: Sample Rule
es_host: 192.168.96.141
es_port: 9200
timestamp_field: timestamp
timestamp_type: custom
timestamp_format: "%Y-%m-%d %H:%M:%S.000"
use_strftime_index: true
writeback_index: elastalert_status
index: graylog*
filter:
- command
command: ["/bin/echo Found something !!"]
```
##########################################################################
```
root@ubuntu:/tmp/elastalert# python -m elastalert.elastalert --verbose --rule sample_rule.yaml
INFO:elastalert:Starting up
INFO:elastalert:Queried rule Sample Rule from 2017-06-18 08:15 PDT to 2017-06-18 08:17 PDT: 0 / 0 hits
INFO:elastalert:Ran Sample Rule from 2017-06-18 08:15 PDT to 2017-06-18 08:17 PDT: 0 query hits (0 already seen), 0 matches, 0 alerts sent
INFO:elastalert:Sleeping for 59.96476 seconds
```
I don't see any alert or message on the console. Am I missing something here?
########################## elastalert-test-rule output #####################
```
root@ubuntu:/tmp/elastalert# elastalert-test-rule sample_rule.yaml
Successfully loaded Sample Rule

WARNING:elasticsearch:GET http://192.168.96.141:9200/graylog*,graylog*/_search?ignore_unavailable=true&size=1 [status:400 request:0.016s]
Error running your filter: RequestError(400, u'search_phase_execution_exception', {u'status': 400, u'error': {u'failed_shards': [{u'node': u'6hiGVUp1ROG5fQNd8-NpEA', u'index': u'graylog_0', u'reason': {u'caused_by': {u'reason': u'Parse failure at index [10] of [2017-06-17T15:18:36.836926Z]', u'type': u'illegal_argument_exception'}, u'reason': u'failed to parse date field [2017-06-17T15:18:36.836926Z] with format [yyyy-MM-dd HH:mm:ss.SSS]', u'type': u'parse_exception'}, u'shard': 0}], u'root_cause': [{u'reason': u'failed to parse date field [2017-06-17T15:18:36.836926Z] with format [yyyy-MM-dd HH:mm:ss.SSS]', u'type': u'parse_exception'}], u'grouped': True, u'reason': u'all shards failed', u'phase': u'query_fetch', u'type': u'search_phase_execution_exception'}})
INFO:elastalert:Note: In debug mode, alerts will be logged to console but NOT actually sent. To send them, use --verbose.
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 08:18 PDT to 2017-06-17 09:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 09:18 PDT to 2017-06-17 10:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 10:18 PDT to 2017-06-17 11:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 11:18 PDT to 2017-06-17 12:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 12:18 PDT to 2017-06-17 13:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 13:18 PDT to 2017-06-17 14:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 14:18 PDT to 2017-06-17 15:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 15:18 PDT to 2017-06-17 16:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 16:18 PDT to 2017-06-17 17:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 17:18 PDT to 2017-06-17 18:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 18:18 PDT to 2017-06-17 19:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 19:18 PDT to 2017-06-17 20:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 20:18 PDT to 2017-06-17 21:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 21:18 PDT to 2017-06-17 22:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 22:18 PDT to 2017-06-17 23:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-17 23:18 PDT to 2017-06-18 00:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-18 00:18 PDT to 2017-06-18 01:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-18 01:18 PDT to 2017-06-18 02:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-18 02:18 PDT to 2017-06-18 03:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-18 03:18 PDT to 2017-06-18 04:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-18 04:18 PDT to 2017-06-18 05:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-18 05:18 PDT to 2017-06-18 06:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-18 06:18 PDT to 2017-06-18 07:18 PDT: 0 / 0 hits
INFO:elastalert:Queried rule Sample Rule from 2017-06-18 07:18 PDT to 2017-06-18 08:18 PDT: 0 / 0 hits

Would have written the following documents to writeback index (default is elastalert_status):

elastalert_status - {'hits': 0, 'matches': 0, '@timestamp': datetime.datetime(2017, 6, 18, 15, 18, 37, 42804, tzinfo=tzutc()), 'rule_name': 'Sample Rule', 'starttime': datetime.datetime(2017, 6, 17, 15, 18, 36, 853748, tzinfo=tzutc()), 'endtime': datetime.datetime(2017, 6, 18, 15, 18, 36, 853748, tzinfo=tzutc()), 'time_taken': 0.17697787284851074}

No alerts found
```
I don't see any alert or message on the console. Am I missing something here?