elastic / logstash

Logstash - transport and process your logs, events, or other data
https://www.elastic.co/products/logstash

logstash processors keep dying with "nil?" redirect? #2060

Closed. Vehyla closed this issue 9 years ago.

Vehyla commented 10 years ago

All of our logs go to a Kafka cluster, and we have some Logstash processors that push them into Elasticsearch. We keep noticing a weird error, and then the processor tends to die. The process itself stays up, mind you, but CPU/load drops to about 1% and there is almost no data going in or out of the ethernet port. I believe Elasticsearch is telling the processor to go to a server, but is sending bad or missing data about that server. But that's just my theory.

Some info that will hopefully help shine some light on the problem.

RPMs: logstash-1.4.2-1_2c0f5a1.noarch, logstash-kafka-1.2.1-1.noarch

My conf:

    input {
      kafka {
        zk_connect       => "zoo01:2111,zoo02:2111,zoo03:2111"
        group_id         => "access"
        topic_id         => "prod.access"
        consumer_threads => 1
      }
    }

    filter {
      mutate {
        remove_field => [ "_type", "_id", "_index", "logdate" ]
      }

      if [type] == "apache_access" {
        ruby {
          code         => 'event["msec"] = event["usec"] / 1000.0 if event["usec"]'
          remove_field => [ "usec" ]
        }
      }
    }

    output {
      elasticsearch {
        host       => "es-access"
        port       => "9200"
        protocol   => "http"
        flush_size => '10000'
        index      => "%{type}-%{+YYYY.MM.dd}"
        cluster    => "elastic-access"
        workers    => "4"
      }
    }

The error we are seeing:

    {:timestamp=>"2014-11-11T17:56:01.351000+0000",
     :message=>"Failed to flush outgoing items",
     :outgoing_count=>117,
     :exception=>#<NoMethodError: undefined method `redirect?' for nil:NilClass>,
     :backtrace=>[
       "/opt/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/agent.rb:336:in `execute'",
       "/opt/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/agent.rb:217:in `post!'",
       "/opt/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:106:in `bulk_ftw'",
       "/opt/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:80:in `bulk'",
       "/opt/logstash/lib/logstash/outputs/elasticsearch.rb:315:in `flush'",
       "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219:in `buffer_flush'",
       "org/jruby/RubyHash.java:1339:in `each'",
       "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216:in `buffer_flush'",
       "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:193:in `buffer_flush'",
       "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:159:in `buffer_receive'",
       "/opt/logstash/lib/logstash/outputs/elasticsearch.rb:311:in `receive'",
       "/opt/logstash/lib/logstash/outputs/base.rb:86:in `handle'",
       "/opt/logstash/lib/logstash/outputs/base.rb:78:in `worker_setup'"
     ], :level=>:warn}
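For context (not from the original report): the NoMethodError above is Ruby's standard error when a method is called on nil. A minimal Ruby sketch of the failure mode, assuming the FTW agent ends up holding a nil HTTP response when the connection through the load balancer drops:

    # Hypothetical illustration, not the FTW source: if the HTTP client
    # receives no response back (e.g. the ELB silently drops the connection),
    # the response object is nil, and any method call on it raises the
    # NoMethodError seen in the backtrace above.
    response = nil                 # assumed state inside ftw/agent.rb
    begin
      response.redirect?           # calling a method on nil
    rescue NoMethodError => e
      puts e.message               # => undefined method `redirect?' for nil:NilClass
    end

The exact message wording varies by Ruby version, but the mechanism is the same: the error points at a missing response rather than a problem in the data being flushed.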

Vehyla commented 10 years ago

We switched away from using an Amazon ELB in front of our Elasticsearch aggregator nodes and instead configured a separate output for each node. We haven't seen this issue in about 24 hours, so we think the ELB was our culprit. Can anyone confirm if they have seen the same thing?
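A rough sketch of that kind of workaround (the hostname below is hypothetical, and how outputs are divided across processors depends on the setup): point each Logstash processor's elasticsearch output directly at a node instead of at the ELB endpoint, for example:

    output {
      elasticsearch {
        host     => "es-node01"    # hypothetical node name; one node per processor instead of the ELB
        port     => "9200"
        protocol => "http"
        index    => "%{type}-%{+YYYY.MM.dd}"
        cluster  => "elastic-access"
      }
    }

This sidesteps whatever the ELB was doing to long-lived bulk connections, at the cost of losing the load balancer's failover.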

suyograo commented 9 years ago

Thanks @Vehyla, we have opened https://github.com/logstash-plugins/logstash-output-elasticsearch/issues/13 to track this.