logstash-plugins / logstash-output-elasticsearch_java

Java API Implementation of Elasticsearch Output
Apache License 2.0

Plugin hangs using node protocol when talking to Shield protected ES with message authentication enabled #37

Open joshuar opened 8 years ago

joshuar commented 8 years ago

Tested with LS 2.1.1 and ES 2.1.1.

Description of problem

In an Elasticsearch cluster protected with Shield and where message authentication is enabled, using the node protocol will fail silently.

How to reproduce

Set up an ES cluster with shield enabled and generate a system key to use for message authentication with bin/shield/syskeygen. Use the following simple config for ES:

shield:
  authc:
    anonymous:
      roles: admin, remote_marvel_agent, marvel_user, kibana4-server
      authz_exception: true
  audit:
    enabled: true

Using the following output configuration in Logstash:

output {
    elasticsearch_java {
        network_host => "localhost"
        protocol => "node"
    }
}

Logstash will "hang" when first attempting to install its template (running LS with --verbose):

log4j:WARN No appenders could be found for logger (org.apache.http.impl.conn.PoolingHttpClientConnectionManager).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Settings: Default filter workers: 2
Using mapping template from {:path=>nil, :level=>:info}
Attempting to install template {:manage_template=>{"template"=>"logstash-*", "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "omit_norms"=>true}, "dynamic_templates"=>[{"message_field"=>{"match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fielddata"=>{"format"=>"disabled"}}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fielddata"=>{"format"=>"disabled"}, "fields"=>{"raw"=>{"type"=>"string", "index"=>"not_analyzed", "doc_values"=>true, "ignore_above"=>256}}}}}, {"float_fields"=>{"match"=>"*", "match_mapping_type"=>"float", "mapping"=>{"type"=>"float", "doc_values"=>true}}}, {"double_fields"=>{"match"=>"*", "match_mapping_type"=>"double", "mapping"=>{"type"=>"double", "doc_values"=>true}}}, {"byte_fields"=>{"match"=>"*", "match_mapping_type"=>"byte", "mapping"=>{"type"=>"byte", "doc_values"=>true}}}, {"short_fields"=>{"match"=>"*", "match_mapping_type"=>"short", "mapping"=>{"type"=>"short", "doc_values"=>true}}}, {"integer_fields"=>{"match"=>"*", "match_mapping_type"=>"integer", "mapping"=>{"type"=>"integer", "doc_values"=>true}}}, {"long_fields"=>{"match"=>"*", "match_mapping_type"=>"long", "mapping"=>{"type"=>"long", "doc_values"=>true}}}, {"date_fields"=>{"match"=>"*", "match_mapping_type"=>"date", "mapping"=>{"type"=>"date", "doc_values"=>true}}}, {"geo_point_fields"=>{"match"=>"*", "match_mapping_type"=>"geo_point", "mapping"=>{"type"=>"geo_point", "doc_values"=>true}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "doc_values"=>true}, "@version"=>{"type"=>"string", "index"=>"not_analyzed", "doc_values"=>true}, "geoip"=>{"type"=>"object", "dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip", "doc_values"=>true}, "location"=>{"type"=>"geo_point", "doc_values"=>true}, "latitude"=>{"type"=>"float", "doc_values"=>true}, "longitude"=>{"type"=>"float", 
"doc_values"=>true}}}}}}}, :level=>:info}

Meanwhile, the Shield audit log reports access_denied messages: the Logstash "node" does not have the system key available and is therefore not authorised to join the cluster:

[2015-12-30 11:43:52,312] [Ezekiel] [transport] [access_denied] origin_type=[transport], origin_address=[127.0.0.1], principal=[_es_anonymous_user], action=[internal:discovery/zen/unicast]

What should happen

It should be possible to specify the system key in the LS elasticsearch_java output plugin configuration so that LS can correctly join a Shield-protected ES cluster when using the node protocol. This is currently not possible.
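One possible shape for such a setting, purely illustrative (the `system_key` option below is hypothetical and does not exist in the plugin today), might be:

```
output {
    elasticsearch_java {
        network_host => "localhost"
        protocol => "node"
        # Hypothetical, unimplemented option: path to the Shield
        # system key so the embedded node can authenticate its
        # transport messages to the cluster
        system_key => "/etc/logstash/system_key"
    }
}
```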