elastic / logstash

Logstash - transport and process your logs, events, or other data
https://www.elastic.co/products/logstash

Logstash restarts with: An unexpected error occurred! {:error=>#<NameError: no method 'debug' for arguments #6217

Open magicdude4eva opened 7 years ago

magicdude4eva commented 7 years ago

Running the latest version of Logstash and plugins, Logstash seems to keep restarting (recycling):

[2016-11-08T12:17:11,736][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.0.3-java/vendor/GeoLite2-City.mmdb"}
[2016-11-08T12:17:11,759][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2016-11-08T12:17:11,761][INFO ][logstash.pipeline        ] Pipeline main started
[2016-11-08T12:17:11,886][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2016-11-08T12:17:24,229][WARN ][logstash.filters.grok    ] Timeout executing grok '%{TRADE_ITEM_LOG}' against field 'message' with value 'Value too large to output (37951 bytes)! First 255 chars are: L/L 08/11/16 10:47:30 tcbobe-17 Solr QueryResponse : {responseHeader={status=0,QTime=6,params={facet.field=[CategoryId, Type, MajorLocationId, ConditionId, VerifiedUser, UserStore, PaymentOption],fl=*,score,start=0,facet.missing=true,fq=[Status:1, Category'!
[2016-11-08T12:17:26,235][WARN ][logstash.filters.grok    ] Timeout executing grok '%{NOTSPACE}/%{NOTSPACE} %{DATESTAMP:timestamp} %{DATA:threadname} oracle.jdbc.driver.OraclePreparedStatementWrapper\@' against field 'message' with value 'Value too large to output (37951 bytes)! First 255 chars are: L/L 08/11/16 10:47:30 tcbobe-17 Solr QueryResponse : {responseHeader={status=0,QTime=6,params={facet.field=[CategoryId, Type, MajorLocationId, ConditionId, VerifiedUser, UserStore, PaymentOption],fl=*,score,start=0,facet.missing=true,fq=[Status:1, Category'!
[2016-11-08T12:17:26,285][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<NameError: no method 'debug' for arguments (org.jruby.RubyArray,org.jruby.RubyHash) on Java::OrgApacheLoggingLog4jCore::Logger
  available overloads:
    (org.apache.logging.log4j.Marker,java.lang.String,java.lang.Object[])
    (java.lang.String,java.lang.Object[])
    (java.lang.String,org.apache.logging.log4j.util.Supplier[])
    (org.apache.logging.log4j.Marker,java.lang.String,org.apache.logging.log4j.util.Supplier[])>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/logging/logger.rb:41:in `debug'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-gelf-3.1.1/lib/logstash/outputs/gelf.rb:190:in `receive'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:19:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:42:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:297:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:296:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:252:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:225:in `start_workers'"]}
[2016-11-08T12:17:42,761][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.0.3-java/vendor/GeoLite2-City.mmdb"}
[2016-11-08T12:17:42,788][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2016-11-08T12:17:42,790][INFO ][logstash.pipeline        ] Pipeline main started
[2016-11-08T12:17:42,925][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
magicdude4eva commented 7 years ago

The above error is caused by the gelf output plugin (https://github.com/logstash-plugins/logstash-output-gelf/issues/20), but surely a broken plugin should not bring down Logstash?
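
For context, a minimal sketch of the failure mode. The exact code at gelf.rb:190 is an assumption here, but the error signature above shows the plugin handing a Ruby Array (plus a Hash) to the Log4j2-backed logger's debug method, for which no Java overload exists; the pre-5.0 Cabin logger tolerated that calling style.

# Hypothetical reconstruction; 'logger' and 'gelf_hash' are stand-in names, not the plugin's actual identifiers.
gelf_hash = { "short_message" => "hello", "host" => "example" }

# Old Cabin-style call: an Array as the log message. Against the Log4j2-backed logger
# this becomes Java Logger#debug(RubyArray, RubyHash), which matches none of the
# overloads listed in the error above, so it raises the NameError.
logger.debug(["Sending GELF event", gelf_hash])

# A String message plus structured data matches the
# (java.lang.String, java.lang.Object[]) overload and logs cleanly.
logger.debug("Sending GELF event", :event => gelf_hash)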

jordansissel commented 7 years ago

Reformatted backtrace to make it more readable:

"/usr/share/logstash/logstash-core/lib/logstash/logging/logger.rb:41:in `debug'"
 "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-gelf-3.1.1/lib/logstash/outputs/gelf.rb:190:in `receive'"
 "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'"
 "org/jruby/RubyArray.java:1613:in `each'"
 "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'"
 "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:19:in `multi_receive'"
 "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:42:in `multi_receive'"
 "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:297:in `output_batch'"
 "org/jruby/RubyHash.java:1342:in `each'"
 "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:296:in `output_batch'"
 "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:252:in `worker_loop'"
jordansissel commented 7 years ago

@magicdude4eva In this particular case, if Logstash weren't crashing, it would be retrying the gelf output, which would never succeed due to this bug -- so either way (crash or stall) the effect is nearly the same.

jordansissel commented 7 years ago

I agree it shouldn't be crashing, though; it should note the error and retry the output (because Logstash should never willfully drop messages).
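
For illustration, a sketch of that "note the error and retry" behaviour. This is not the actual Logstash pipeline code; the method and parameter names are assumptions, built around the multi_receive-style output interface visible in the backtrace above.

# Illustrative only: retry an output batch instead of letting the exception
# escape and kill the worker thread (which is what produces the FATAL above).
def output_batch_with_retry(output, events, logger)
  begin
    output.multi_receive(events)
  rescue => e
    logger.error("Output raised an exception; retrying the batch",
                 :output => output.class.name, :exception => e.message)
    sleep(1)   # back off briefly so a permanently broken output doesn't spin the CPU
    retry      # never drop the in-flight events
  end
end

With a permanently broken output like this gelf bug, the pipeline would stall on the batch rather than crash, which is the trade-off discussed above.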