StamusNetworks / SELKS

A Suricata based IDS/IPS/NSM distro
https://www.stamus-networks.com/open-source/#selks
GNU General Public License v3.0

Could not dynamically add mapping for field [ET.http.javaclient.vulnerable] #130

Open · yorkvik opened this issue 6 years ago

yorkvik commented 6 years ago

Hi,

All alerts are coming in and showing normally, except for the rules with SIDs 2019401, 2014297 and 2011582 (all related to outdated Java clients).

Error in logstash is:

[2018-09-24T11:18:12,750][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.09.24", :_type=>"SuricataIDPS", :_routing=>nil}, 2018-09-24T09:18:12.482Z localhost %{message}], :response=>{"index"=>{"_index"=>"logstash-2018.09.24", "_type"=>"SuricataIDPS", "_id"=>"AWYK3zbQfZgnDbAs3RR9", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Could not dynamically add mapping for field [ET.http.javaclient.vulnerable]. Existing mapping for [vars.flowbits.ET.http.javaclient] must be of type object but found [boolean]."}}}}
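For reference, the existing mapping for that field path can be inspected directly against Elasticsearch (assuming it listens on localhost:9200, as in a default SELKS setup):

$ curl -s 'localhost:9200/logstash-2018.09.24/_mapping?pretty' | grep -B 1 -A 2 'javaclient'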

Any idea what causes this error and how to remediate it?

Thanks in advance,

pevma commented 6 years ago

It is generated because of a flowbit in a rule (you should be able to find it by grepping for it in /etc/suricata/rules/scirius.rules) - so the generated alert event has that field. Can you find a similar alert event log in the Discover page of Kibana and post a full screenshot?
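For example, something along these lines should list the rules setting that flowbit (path taken from a default SELKS install):

$ grep 'flowbits:set,ET.http.javaclient' /etc/suricata/rules/scirius.rules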

yorkvik commented 6 years ago

Thanks for the quick response.

Indeed, there is a flowbit ET.http.javaclient.vulnerable being set. I cannot find any related event in Kibana (since it never makes it into Elasticsearch at all).

Here are the greps run directly against the local files; I hope this helps.

$ grep 2019401 scirius.rules

alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"ET POLICY Vulnerable Java Version 1.8.x Detected"; flow:established,toserver; content:" Java/1.8.0"; http_user_agent; content:!"181"; within:3; http_user_agent; flowbits:set,ET.http.javaclient.vulnerable; threshold: type limit, count 2, seconds 300, track by_src; metadata: former_category POLICY; reference:url,www.oracle.com/technetwork/java/javase/8u-relnotes-2225394.html; classtype:bad-unknown; sid:2019401; rev:24; metadata:affected_product Java, attack_target Client_Endpoint, deployment Perimeter, deployment Internal, signature_severity Audit, created_at 2014_10_15, performance_impact Low, updated_at 2018_07_17;)

$ grep ET.http.javaclient.vulnerable eve.json

{"timestamp":"2018-09-24T06:41:50.965374+0200","flow_id":131301093845994,"in_iface":"eno1","event_type":"alert","vlan":1000,"src_ip":"10.0.0.1","src_port":35286,"dest_ip":"10.0.0.2","dest_port":80,"proto":"TCP","tx_id":0,"alert":{"action":"allowed","gid":1,"signature_id":2019401,"rev":24,"signature":"ET POLICY Vulnerable Java Version 1.8.x Detected","category":"Potentially Bad Traffic","severity":2},"http":{"hostname":"localhost","url":"\/index.html","http_user_agent":"Mozilla\/4.0 (Windows Server 2012 R2 6.3) Java\/1.8.0_111","http_content_type":"application\/x-pkcs7-crl","http_method":"GET","protocol":"HTTP\/1.1","status":200,"length":5514},"vars":{"flowbits":{"ET.JavaNotJar":true,"ET.http.javaclient.vulnerable":true,"ET.http.javaclient":true}},"app_proto":"http","flow":{"pkts_toserver":4,"pkts_toclient":9,"bytes_toserver":446,"bytes_toclient":10669,"start":"2018-09-24T06:41:50.961514+0200"}}

pevma commented 6 years ago

Do you have that alert ("signature_id":2019401) in the SN-ALERT dashboards or in EveBox? The flow ID is "flow_id":131301093845994.

yorkvik commented 6 years ago

No, nothing in those dashboards. The event never even gets into Elasticsearch (the error occurs at the Logstash level).
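If it helps, the dropped events can be counted from the Logstash log (path assumed from a default package install):

$ grep -c 'Could not dynamically add mapping' /var/log/logstash/logstash-plain.log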

pevma commented 6 years ago

Could you please double check something, just to be on the safe side: if you open the SN ALERTS dashboard and search with the filter alert.signature_id:"2019401", does anything come up?
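As an additional check, the same query could be run directly against Elasticsearch (host and port assumed from a default SELKS setup); if hits.total is 0, the events really never made it in:

$ curl -s 'localhost:9200/logstash-*/_search?q=alert.signature_id:2019401&size=0&pretty'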

yorkvik commented 6 years ago

Checked again, unfortunately nothing found :-(

pevma commented 6 years ago

Can you try updating the indexes? If you go to Kibana's Management, Kibana index patterns, choose the logstash-alert-* index and do a refresh, does that make any difference?