Closed: Vehyla closed this issue 9 years ago
We switched away from using an Amazon ELB for our elasticsearch aggregator nodes and just did separate output configurations for each node. We haven't seen this issue in about 24 hours. So we are thinking that was our culprit. Can anyone confirm if they have seen the same thing?
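For anyone else hitting this, here is a rough sketch of what that per-node output layout could look like. The node name es-node01 is a placeholder, and the remaining settings simply mirror the output block from the original report below; each Logstash processor instance would get its own copy of this, pointed at a different aggregator node, instead of every processor going through the ELB hostname.

output {
  elasticsearch {
    # point directly at one Elasticsearch aggregator node instead of the ELB
    host       => "es-node01"
    port       => "9200"
    protocol   => "http"
    flush_size => '10000'
    index      => "%{type}-%{+YYYY.MM.dd}"
    cluster    => "elastic-access"
    workers    => "4"
  }
}

Assuming the Kafka input's consumer group is already spreading partitions across the processors, this keeps indexing traffic distributed across the nodes without a load balancer in front of them.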
Thanks @Vehyla, we have opened https://github.com/logstash-plugins/logstash-output-elasticsearch/issues/13 to track this.
All of our logs go to a kafka cluster, then we have some logstash processors that push them into elasticsearch. We keep noticing a weird error, and then the processor tends to die. The process is still up, mind you, but CPU/load drops to about 1% and there is almost no data going in or out of the ethernet port. I believe elasticsearch is telling the processor to go to a server, but is sending bad or missing data about said server. But that's just my theory.
Some info that will hopefully help shed some light on the problem.
rpms: logstash-1.4.2-1_2c0f5a1.noarch, logstash-kafka-1.2.1-1.noarch
My conf:

input {
  kafka {
    zk_connect       => "zoo01:2111,zoo02:2111,zoo03:2111"
    group_id         => "access"
    topic_id         => "prod.access"
    consumer_threads => 1
  }
}

filter {
  mutate {
    remove_field => [ "_type", "_id", "_index", "logdate" ]
  }

  if [type] == "apache_access" {
    ruby {
      code         => 'event["msec"] = event["usec"] / 1000.0 if event["usec"]'
      remove_field => [ "usec" ]
    }
  }
}

output {
  elasticsearch {
    host       => "es-access"
    port       => "9200"
    protocol   => "http"
    flush_size => '10000'
    index      => "%{type}-%{+YYYY.MM.dd}"
    cluster    => "elastic-access"
    workers    => "4"
  }
}
The error we are seeing:

{:timestamp=>"2014-11-11T17:56:01.351000+0000", :message=>"Failed to flush outgoing items", :outgoing_count=>117, :exception=>#<NoMethodError: undefined method `redirect?' for nil:NilClass>, :backtrace=>[
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/agent.rb:336:in `execute'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/agent.rb:217:in `post!'",
  "/opt/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:106:in `bulk_ftw'",
  "/opt/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:80:in `bulk'",
  "/opt/logstash/lib/logstash/outputs/elasticsearch.rb:315:in `flush'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219:in `buffer_flush'",
  "org/jruby/RubyHash.java:1339:in `each'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216:in `buffer_flush'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:193:in `buffer_flush'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:159:in `buffer_receive'",
  "/opt/logstash/lib/logstash/outputs/elasticsearch.rb:311:in `receive'",
  "/opt/logstash/lib/logstash/outputs/base.rb:86:in `handle'",
  "/opt/logstash/lib/logstash/outputs/base.rb:78:in `worker_setup'"], :level=>:warn}