Closed: kostasb closed this issue 5 years ago
The same error occurs in Logstash v6.2.4 with logstash-input-jdbc using a MongoDB connection:
[2018-04-26T21:11:40,393][WARN][logstash.inputs.jdbc] Exception when executing JDBC query {:exception=>#<Sequel::DatabaseError: Java::OrgLogstash::MissingConverterException: Missing Converter handling for full class name=java.util.Date, simple name=Date>}
Description: Exactly the same error as in the first comment occurs on the versions described here.
We consume messages from five RabbitMQ queues. We use essentially the same configuration for all of them and the messages are similar (same JSON payload), but this happens on only one of them, creating a bunch of consumers (one per message, plus one) on that RabbitMQ queue. The queue where the issue appears is the only one we use for dead letters on RabbitMQ (https://www.rabbitmq.com/dlx.html), so the issue should be related to that. It was working with previous ELK v5.x.x versions.
We moved the messages from the queue we cannot read to another one (a new and different queue, consumer, vhost...) and ended up in exactly the same situation. It looks like messages with the x-death header are causing the issue with the plugin:
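To illustrate why x-death headers in particular trigger the error: a dead-lettered message carries an x-death header whose "time" entry is a date object, and the plugin's header normalization had no converter for it. Below is a minimal sketch in plain Ruby of the kind of normalization that was missing; the real plugin deals with java.util.Date objects from the JRuby AMQP client, and Ruby's Time stands in for them here.

```ruby
require 'time'

# Recursively walk the AMQP headers and turn time-like values into
# ISO-8601 strings before they reach the Logstash event.
# (Illustrative only; not the plugin's actual code.)
def normalize_headers(value)
  case value
  when Hash  then value.each_with_object({}) { |(k, v), out| out[k.to_s] = normalize_headers(v) }
  when Array then value.map { |v| normalize_headers(v) }
  when Time  then value.utc.iso8601   # the conversion that was missing
  else value
  end
end

# Shape of an x-death header on a dead-lettered message (values assumed):
headers = {
  "x-death" => [
    { "reason" => "expired", "queue" => "my-queue", "time" => Time.at(1_534_783_296) }
  ]
}
normalized = normalize_headers(headers)
# normalized["x-death"][0]["time"] is now a plain string
```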
Stack trace:
[2018-08-20T16:41:36,337][ERROR][logstash.pipeline] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::RabbitMQ host=>[""], port=>, ssl=>true, vhost=>"", queue=>"", passive=>true, user=>"", password=><password>, metadata_enabled=>true, tags=>["], add_field=>{"indexType"=>""}, id=>"", enable_metric=>true, codec=><LogStash::Codecs::JSON id=>"json_8eb64acd-7840-4a31-ad48-95b45e784451", enable_metric=>true, charset=>"UTF-8">, threads=>1, ssl_version=>"TLSv1.2", automatic_recovery=>true, connect_retry_interval=>1, durable=>false, auto_delete=>false, exclusive=>false, prefetch_count=>256, ack=>true, key=>"logstash", subscription_retry_interval_seconds=>5>
Error: Missing Converter handling for full class name=java.util.Date, simple name=Date
...
org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:71)
org.jruby.runtime.Block.call(Block.java:124)
org.jruby.RubyProc.call(RubyProc.java:289)
org.jruby.RubyProc.call(RubyProc.java:246)
org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:104)
java.lang.Thread.run(Thread.java:748)
Logstash configurations:
input {
  rabbitmq {
    host => "098765432"
    port => 5671
    ssl => true
    vhost => "098765432"
    queue => "098765432"
    passive => true
    user => "98765432"
    password => "0987654321"
    metadata_enabled => true
    tags => [ "rabbitmq", "deadletter" ]
    add_field => {
      "indexType" => "my-index"
    }
  }
}
filter {
  if "rabbitmq" in [tags] {
    mutate {
      add_field => { "[@metadata][indexType]" => "%{indexType}" }
      remove_field => [ "indexType" ]
    }
    if [@metadata][rabbitmq_properties][timestamp] {
      date {
        match => ["[@metadata][rabbitmq_properties][timestamp]", "UNIX"]
      }
    }
    if "deadletter" in [tags] {
      if [@metadata][rabbitmq_properties][x-death] {
        mutate {
          add_field => { "rabbitmq_x_death" => "%{[@metadata][rabbitmq_properties][x-death]}" }
        }
      }
    }
  }
}
output {
  elasticsearch {
    hosts => ["098765432"]
    index => "%{[@metadata][indexType]}-%{+YYYY.MM.dd}"
  }
}
As a temporary workaround, we have disabled the metadata and are adding the field handled as false:
input {
  rabbitmq {
    host => "098765432"
    port => 5671
    ssl => true
    vhost => "098765432"
    queue => "098765432"
    passive => true
    user => "98765432"
    password => "0987654321"
    metadata_enabled => false
    tags => [ "rabbitmq", "deadletter" ]
    add_field => {
      "indexType" => "my-index"
      "handled" => "false"
    }
  }
}
My fix in Logstash for this issue is released with Logstash version 7. Version 7.0.0-alpha2 is out now, so it is easily accessible. I think this issue can be closed.
Resolved in Logstash v7.0 - thank you @msvticket
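For context on what "Missing Converter" means: Logstash maps each incoming value's class to a converter before the value enters an event, and an unknown class raises the error seen in the stack trace above. The fix in Logstash 7 registered a converter for java.util.Date in core. The plain-Ruby sketch below is conceptual only; the registry and names are illustrative, not Logstash's real API, and Time stands in for java.util.Date.

```ruby
require 'time'

# Hypothetical converter registry: class => lambda producing an
# event-safe representation of the value.
CONVERTERS = {
  String  => ->(v) { v },
  Integer => ->(v) { v },
  Time    => ->(v) { v.utc.iso8601 }   # the kind of mapping the fix added
}

def convert(value)
  converter = CONVERTERS[value.class]
  unless converter
    raise "Missing Converter handling for full class name=#{value.class}, " \
          "simple name=#{value.class.name.split('::').last}"
  end
  converter.call(value)
end
```

With a Time converter registered, convert(Time.at(0)) yields an ISO-8601 string; without one, the lookup fails with the same "Missing Converter" message reported in this issue.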
Description:
When the rabbitmq input encounters a Date field in the metadata, the pipeline logs an unrecoverable error and restarts the plugin.
It seems that the header normalization method in the rabbitmq client does not convert the Date class.
Stack trace: