Open ash-darin opened 1 year ago
Please note: either this is not the same issue as #14605, or the backport #14752 did not address it correctly. It fails with a different stack trace and still occurs in 7.17.7.
The referenced 7.x backport is tagged with 7.17.8, indicating that it is not expected to have been available in the 7.17.7 release reported here. Additionally, the stack trace posted on this issue aligns with the stack trace on #14599, which is where the issue was first reported. I would advise upgrading to 7.17.8 to consume the fix.
An in-place remediation for <= 7.17.7 would be to back up the DLQ directory, delete any .log files in the DLQ's data directory that are exactly one byte in size, and then restart the associated pipeline.
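A minimal sketch of that remediation, assuming the container path used elsewhere in this thread (/usr/share/logstash/data/dead_letter_queue) and GNU find; adjust to your own path.data:

# Back up the whole DLQ directory first, then remove segment files that are exactly 1 byte
cp -a /usr/share/logstash/data/dead_letter_queue /tmp/dead_letter_queue.backup
find /usr/share/logstash/data/dead_letter_queue -type f -name '*.log' -size 1c -print -delete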
Hello,
I'm sorry, I seem to have posted the wrong stack trace. I tested this on 7.17.8 and it fails there too. I have now reproduced this with an old copy of the dead_letter_queue directory instead of the files from #14605 and posted the stack trace below. 7.17.7 is the state of the test environment that I use as a staging area for my upgrade, which is why I still referred to that version; this problem has plagued me since late October. Thank you for clarifying, by the way, that a fix is not widely available until the version tag is added to the issue.
Stack Trace from 7.17.8 below
Starting Logstash {"logstash.version"=>"7.17.8", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.17+8 on 11.0.17+8 +indy +jit [linux-x86_64]"}
[2023-02-28T09:08:36,578][ERROR][logstash.javapipeline ][dead_letter_queue_valve][ebd78db2aa8c043ce909e8eeafc66cd3f13aae8cb3f9deba86b361e4b0013eba] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:dead_letter_queue_valve
Plugin: <LogStash::Inputs::DeadLetterQueue pipeline_id=>"oam_filebeat_7x_in", path=>"/usr/share/logstash/data/dead_letter_queue/", add_field=>{"[sl][dlq][pipeline]"=>"oam_filebeat_7x_in"}, id=>"ebd78db2aa8c043ce909e8eeafc66cd3f13aae8cb3f9deba86b361e4b0013eba", commit_offsets=>true, enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_eb311f2b-fa0e-4ca3-ad16-79d35f4c503c", enable_metric=>true, charset=>"UTF-8">>
Error: newPosition < 0: (-1 < 0)
Exception: Java::JavaLang::IllegalArgumentException
Stack: java.nio.Buffer.createPositionException(java/nio/Buffer.java:318)
java.nio.Buffer.position(java/nio/Buffer.java:293)
java.nio.ByteBuffer.position(java/nio/ByteBuffer.java:1094)
org.logstash.common.io.RecordIOReader.consumeBlock(org/logstash/common/io/RecordIOReader.java:184)
org.logstash.common.io.RecordIOReader.consumeToStartOfEvent(org/logstash/common/io/RecordIOReader.java:238)
org.logstash.common.io.RecordIOReader.readEvent(org/logstash/common/io/RecordIOReader.java:282)
org.logstash.common.io.DeadLetterQueueReader.pollEntryBytes(org/logstash/common/io/DeadLetterQueueReader.java:144)
org.logstash.common.io.DeadLetterQueueReader.pollEntry(org/logstash/common/io/DeadLetterQueueReader.java:121)
org.logstash.input.DeadLetterQueueInputPlugin.run(org/logstash/input/DeadLetterQueueInputPlugin.java:104)
jdk.internal.reflect.GeneratedMethodAccessor179.invoke(jdk/internal/reflect/GeneratedMethodAccessor179)
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(jdk/internal/reflect/DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:566)
org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:456)
org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:317)
usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_input_minus_dead_letter_queue_minus_1_dot_1_dot_12.lib.logstash.inputs.dead_letter_queue.run(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-dead_letter_queue-1.1.12/lib/logstash/inputs/dead_letter_queue.rb:74)
usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.inputworker(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:410)
usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start_input(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:401)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:318)
java.lang.Thread.run(java/lang/Thread.java:829)
This stack trace runs through these frames:
org.logstash.common.io.DeadLetterQueueReader.pollEntryBytes(org/logstash/common/io/DeadLetterQueueReader.java:144)
org.logstash.common.io.DeadLetterQueueReader.pollEntry(org/logstash/common/io/DeadLetterQueueReader.java:121)
which the other stack trace does not. I cannot tell how significant this difference is, though.
P.S.: Should I edit the original issue to correct my error and reflect this?
An in-place remediation for <= 7.17.7 would be to back up the DLQ directory, delete any .log files in the DLQ's data directory that are exactly one byte in size, and then restart the associated pipeline.
I was told about this workaround by another person too, but as it does not seem to prevent the issue from recurring, I am reluctant to apply it. Is there any information on whether this workaround prevents the problem state from being reached?
A difference from #14605 is that the sincedb file from that ticket points to
/var/lib/logstash/dead_letter_queue/live-cf-filebeat/170033.log
root@test01:~/unpack# hexdump -C sincedb_1dcdb29e71c4b6a0ec8e39a1d1757aa3
00000000 00 31 00 00 00 3f 2f 76 61 72 2f 6c 69 62 2f 6c |.1...?/var/lib/l|
00000010 6f 67 73 74 61 73 68 2f 64 65 61 64 5f 6c 65 74 |ogstash/dead_let|
00000020 74 65 72 5f 71 75 65 75 65 2f 6c 69 76 65 2d 63 |ter_queue/live-c|
00000030 66 2d 66 69 6c 65 62 65 61 74 2f 31 37 30 30 33 |f-filebeat/17003|
00000040 33 2e 6c 6f 67 00 00 00 00 00 00 13 6d 00 00 00 |3.log.......m...|
00000050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000080
That path is not the suggested /LOGSTASH_HOME/data/dead_letter_queue/test_pipeline and will most likely not exist on any system where this was reproduced, yet the error was triggered anyway. The sincedb files on my system do point to files that actually exist in my filesystem.
root@test01:/usr/share/logstash/data/plugins/inputs/dead_letter_queue/oam_filebeat_7x_in# hexdump -C .sincedb_78cbb9a3be93df496ba2a807934c836d
00000000 00 31 00 00 00 43 2f 75 73 72 2f 73 68 61 72 65 |.1...C/usr/share|
00000010 2f 6c 6f 67 73 74 61 73 68 2f 64 61 74 61 2f 64 |/logstash/data/d|
00000020 65 61 64 5f 6c 65 74 74 65 72 5f 71 75 65 75 65 |ead_letter_queue|
00000030 2f 6f 61 6d 5f 66 69 6c 65 62 65 61 74 5f 37 78 |/oam_filebeat_7x|
00000040 5f 69 6e 2f 37 2e 6c 6f 67 00 00 00 00 00 00 00 |_in/7.log.......|
00000050 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000060 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000084
As I cannot simply edit a sincedb file, I cannot provide you with a file to reproduce this.
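That said, the dump above suggests the layout is just a 2-byte version marker ('1'), a 4-byte big-endian path length, the path itself, and an 8-byte offset; assuming that guess is correct, a file with the same leading bytes could be rebuilt from a bash shell:

# Rebuild the leading bytes of the sincedb shown above (layout inferred from the hexdump,
# not from documentation; the trailing zero padding of the original file is omitted)
SEGMENT=/usr/share/logstash/data/dead_letter_queue/oam_filebeat_7x_in/7.log   # 67 characters
{
  printf '\x00\x31'                             # version marker, the character '1'
  printf '\x00\x00\x00\x43'                     # path length 67 (0x43), big-endian
  printf '%s' "$SEGMENT"                        # the referenced segment path
  printf '\x00\x00\x00\x00\x00\x00\x00\x01'     # offset 1 within that segment
} > .sincedb_rebuilt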
It would be helpful to know under which conditions these empty 1.log / 2.log etc. files are actually created. Can somebody elaborate?
It would be helpful to know under which conditions these empty 1.log / 2.log etc. files are actually created
Empty files shouldn't be created. The DLQ creates an <index>.log.tmp file, which is the actual head segment of the DLQ, and only the upstream pipeline can write into it. When it's created, it contains just the version (1 byte); it grows as data comes in.
When it's finalized, it is renamed to drop the .tmp extension and is then considered valid to be consumed by the downstream pipeline.
If a head segment has received some writes, so its size is greater than 1 byte, it can be finalized when:
- dead_letter_queue.max_bytes is reached (default 1MB), or
- dead_letter_queue.flush_interval has expired (default 5 seconds).
Maybe the 1-byte segments originated before PR #12304, which was fixed in 7.10.0.
I tried to reproduce your issue after creating a couple of segments with an upstream pipeline, something simple that writes to a closed index:
input {
stdin {
codec => json
}
}
output {
elasticsearch {
index => "test_index"
hosts => "http://localhost:9200"
user => "elastic"
password => "secret"
}
}
How to close the index?
POST test_index/_close
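Or, outside of Kibana Dev Tools, the same call with curl (credentials as in the pipeline above):

curl -u elastic:secret -X POST "http://localhost:9200/test_index/_close"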
I used the sincedb_1dcdb29e71c4b6a0ec8e39a1d1757aa3 present in dlq_sincedb_log.zip, and a pipeline like:
input {
dead_letter_queue {
sincedb_path => "/path/to/sincedb_1dcdb29e71c4b6a0ec8e39a1d1757aa3"
path => "/path/to/logstash/data/dead_letter_queue/"
pipeline_id => "main"
}
}
output {
stdout {}
}
In this case, the path contained in the sincedb file is:
/var/lib/logstash/dead_letter_queue/live-cf-filebeat/170033.log
which doesn't exist in the local filesystem. So the DLQ reader calls setCurrentReaderAndPosition("/var/lib/logstash/dead_letter_queue/live-cf-filebeat/170033.log"), and when the file doesn't exist it falls back to the first segment:
https://github.com/elastic/logstash/blob/v7.17.8/logstash-core/src/main/java/org/logstash/common/io/DeadLetterQueueReader.java#L166-L169.
The ordering is done by segment id; in this case I think you have 7 segments in your /usr/share/logstash/data/dead_letter_queue/oam_filebeat_7x_in/, so it just checks which segments have an id greater than 170033 (none), and starts polling data from the first segment present in the DLQ folder.
Please could you provide a listing of your DLQ segments?
ls -lart /usr/share/logstash/data/dead_letter_queue/oam_filebeat_7x_in
Some DLQ debug logs from when this happens would also be helpful to understand what's going on.
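If it helps, DLQ debug logging can be turned on at runtime through Logstash's logging API; a sketch, assuming the logger names simply follow the Java packages seen in the stack traces:

# Raise the log level of the DLQ reader classes on a running Logstash (API on port 9600)
curl -X PUT 'http://localhost:9600/_node/logging' -H 'Content-Type: application/json' -d '
{
  "logger.org.logstash.common.io.DeadLetterQueueReader": "DEBUG",
  "logger.org.logstash.common.io.RecordIOReader": "DEBUG"
}'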
The difference between the two stack traces is that the one provided in #14599
java.nio.Buffer.position(java/nio/Buffer.java:293)
java.nio.ByteBuffer.position(java/nio/ByteBuffer.java:1094)
java.nio.ByteBuffer.position(java/nio/ByteBuffer.java:262)
org.logstash.common.io.RecordIOReader.consumeBlock(org/logstash/common/io/RecordIOReader.java:184)
org.logstash.common.io.RecordIOReader.consumeToStartOfEvent(org/logstash/common/io/RecordIOReader.java:238)
org.logstash.common.io.RecordIOReader.readEvent(org/logstash/common/io/RecordIOReader.java:282)
org.logstash.common.io.RecordIOReader.seekToNextEventPosition(org/logstash/common/io/RecordIOReader.java:156)
org.logstash.common.io.DeadLetterQueueReader.seekToNextEvent(org/logstash/common/io/DeadLetterQueueReader.java:87)
org.logstash.input.DeadLetterQueueInputPlugin.setInitialReaderState(org/logstash/input/DeadLetterQueueInputPlugin.java:95)
org.logstash.input.DeadLetterQueueInputPlugin.lazyInitQueueReader(org/logstash/input/DeadLetterQueueInputPlugin.java:74)
org.logstash.input.DeadLetterQueueInputPlugin.register(org/logstash/input/DeadLetterQueueInputPlugin.java:81)
started in the DLQ input register phase; in particular, it loads events from a timestamp. This happens when the start_timestamp setting is used.
In this case
java.nio.Buffer.createPositionException(java/nio/Buffer.java:318)
java.nio.Buffer.position(java/nio/Buffer.java:293)
java.nio.ByteBuffer.position(java/nio/ByteBuffer.java:1094)
org.logstash.common.io.RecordIOReader.consumeBlock(org/logstash/common/io/RecordIOReader.java:184)
org.logstash.common.io.RecordIOReader.consumeToStartOfEvent(org/logstash/common/io/RecordIOReader.java:238)
org.logstash.common.io.RecordIOReader.readEvent(org/logstash/common/io/RecordIOReader.java:282)
org.logstash.common.io.DeadLetterQueueReader.pollEntryBytes(org/logstash/common/io/DeadLetterQueueReader.java:144)
org.logstash.common.io.DeadLetterQueueReader.pollEntry(org/logstash/common/io/DeadLetterQueueReader.java:121)
org.logstash.input.DeadLetterQueueInputPlugin.run(org/logstash/input/DeadLetterQueueInputPlugin.java:104)
it happens when a reference from the sincedb file is used.
Please could you provide a listing of your DLQ segments?
ls -lart /usr/share/logstash/data/dead_letter_queue/oam_filebeat_7x_in
Here is a (shortened) listing of the directory:
logstash@prod-system:~$ ls -lart /usr/share/logstash/data/dead_letter_queue/oam_filebeat_7x_in
total 6360
-rw-r--r-- 1 logstash logstash 1 Feb 14 2022 1.log
-rw-r--r-- 1 logstash logstash 1 Feb 17 2022 2.log
-rw-r--r-- 1 logstash logstash 1 Feb 21 2022 3.log
-rw-r--r-- 1 logstash logstash 1 Feb 21 2022 4.log
-rw-r--r-- 1 logstash logstash 1 Feb 21 2022 5.log
-rw-r--r-- 1 logstash logstash 1 Feb 21 2022 6.log
-rw-r--r-- 1 logstash logstash 1 Feb 21 2022 7.log
-rw-r--r-- 1 logstash logstash 1 Feb 22 2022 8.log
-rw-r--r-- 1 logstash logstash 1 Feb 22 2022 9.log
[ 1567 more lines of this ]
-rw-r--r-- 1 logstash logstash 1 Sep 28 2022 1576.log
-rw-r--r-- 1 logstash logstash 1 Sep 28 2022 1577.log
-rw-r--r-- 1 logstash logstash 1 Sep 28 2022 1578.log
drwxr-xr-x 20 logstash logstash 4096 Dec 9 11:46 ..
-rw-r--r-- 1 logstash logstash 0 May 22 17:47 .lock
-rw-r--r-- 1 logstash logstash 1 May 22 17:47 1579.log.tmp
drwxr-xr-x 2 logstash logstash 36864 May 22 17:47 .
I hope this helps?
I've tried to reproduce locally with a fresh installation of 7.17.8, creating empty segment files, and the reader doesn't crash. What I did:
- download 7.17.8 and unpack it from https://www.elastic.co/downloads/past-releases/logstash-7-17-8
- create the empty segment files (with a small script) in <ls_unpacked_root>/data/dead_letter_queue/main
- run a pipeline like:
input {
dead_letter_queue {
path => "/path/to/logstash/data/dead_letter_queue/"
pipeline_id => "main"
}
}
output {
stdout {}
}
Could you please check whether this setup reproduces the problem in your environment?
The files that script creates bear no resemblance to the files I have. My files:
hexdump -C 2.log
00000000 31 |1|
00000001
Your files:
hexdump -C 2.log
00000000 01 |.|
00000001
I therefore didn't check whether these files cause any problems in my setup.
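A quick way to compare the first byte across all segments (plain od, nothing exotic; the path is my pipeline's DLQ directory):

# Print the first byte of every segment to tell 0x31 ('1') apart from 0x01
for f in /usr/share/logstash/data/dead_letter_queue/oam_filebeat_7x_in/*.log; do
  printf '%s: ' "$f"; od -An -tx1 -N1 "$f"
done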
I retested with 7.17.10 and got this stack trace (names slightly edited as it is from a customer):
[2023-06-15T12:15:26,025][ERROR][logstash.javapipeline ][dead_letter_queue_valve][0b80bf3de007ce3172abef211e15b38452ffe788303f7411b801cf4b2d02cfab] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:dead_letter_queue_valve
Plugin: <LogStash::Inputs::DeadLetterQueue pipeline_id=>"oam_test_filebeat_in", path=>"/usr/share/logstash/data/dead_letter_queue/", add_field=>{"[dlq][pipeline]"=>"oam_test_filebeat_in"}, id=>"0b80bf3de007ce3172abef211e15b38452ffe788303f7411b801cf4b2d02abfc", commit_offsets=>true, enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_0f52d278-6529-4319-b6d5-b78ab5a33f21", enable_metric=>true, charset=>"UTF-8">>
Error:
Exception: Java::JavaNio::InvalidMarkException
Stack: java.nio.Buffer.reset(java/nio/Buffer.java:399)
java.nio.ByteBuffer.reset(java/nio/ByteBuffer.java:1133)
org.logstash.common.io.RecordIOReader.consumeBlock(org/logstash/common/io/RecordIOReader.java:188)
org.logstash.common.io.RecordIOReader.consumeToStartOfEvent(org/logstash/common/io/RecordIOReader.java:238)
org.logstash.common.io.RecordIOReader.readEvent(org/logstash/common/io/RecordIOReader.java:282)
org.logstash.common.io.DeadLetterQueueReader.pollEntryBytes(org/logstash/common/io/DeadLetterQueueReader.java:144)
org.logstash.common.io.DeadLetterQueueReader.pollEntry(org/logstash/common/io/DeadLetterQueueReader.java:121)
org.logstash.input.DeadLetterQueueInputPlugin.run(org/logstash/input/DeadLetterQueueInputPlugin.java:104)
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(jdk/internal/reflect/NativeMethodAccessorImpl.java:62)
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(jdk/internal/reflect/DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:566)
org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:426)
org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:293)
usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_input_minus_dead_letter_queue_minus_1_dot_1_dot_12.lib.logstash.inputs.dead_letter_queue.run(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-dead_letter_queue-1.1.12/lib/logstash/inputs/dead_letter_queue.rb:74)
usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.inputworker(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:410)
usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start_input(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:401)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:318)
java.lang.Thread.run(java/lang/Thread.java:829)
The files that script creates bear no resemblance to the files I have
My bad, you are right: the DLQ saves the '1' character value and not the integer 1 value.
I retested with 7.17.10 and got this stack trace ...
This is a different stack trace; it contains an InvalidMarkException instead of the original IllegalArgumentException. I'll give it a look.
I've updated the gist to create the segment files with the 1 character instead of the number.
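For completeness, a stand-in sketch of what the updated script does (the gist itself is not reproduced here; the unpack path and segment count are placeholders):

# Create "finalized" segments that contain only the version character '1' (0x31),
# matching the 1-byte files reported above
LS_UNPACKED_ROOT=/path/to/logstash-7.17.8        # wherever the archive was unpacked
DLQ_DIR="$LS_UNPACKED_ROOT/data/dead_letter_queue/main"
mkdir -p "$DLQ_DIR"
for i in 1 2 3 4 5; do
  printf '1' > "$DLQ_DIR/$i.log"                 # exactly one byte each
done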
Thank you for looking into it; unfortunately, my knowledge of Java is too limited to debug this myself. It is really appreciated.
Plugins installed
JVM
provided by container
"Using bundled JDK: /usr/share/logstash/jdk"
"jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.16+8 on 11.0.16+8 +indy +jit [linux-x86_64]"
OS version
Linux testmachine 4.19.0-23-amd64 #1 SMP Debian 4.19.269-1 (2022-12-20) x86_64 GNU/Linux
Description of the problem including expected versus actual behavior
Logstash is unable to start, throwing
java.lang.IllegalArgumentException: newPosition < 0: (-1 < 0)
Logstash fails at startup with the following stack trace. Please note: either this is not the same issue as #14605, or the backport #14752 did not address it correctly; it fails with a different stack trace and still occurs in 7.17.7.
Steps to reproduce
Take the files from #14605 (attached) with this config:
dlq_sincedb_log.zip
Copy the sincedb file (with a preceding dot) and the 1-byte files to /usr/share/logstash/data/dead_letter_queue/ (from the viewpoint of the container).
Provide logs (if relevant)
[2023-02-27T16:26:43,659][ERROR][logstash.javapipeline ][dead_letter_queue_valve] Pipeline error {:pipeline_id=>"dead_letter_queue_valve", :exception=>java.lang.IllegalArgumentException: newPosition < 0: (-1 < 0), :backtrace=>["java.nio.Buffer.createPositionException(java/nio/Buffer.java:318)", "java.nio.Buffer.position(java/nio/Buffer.java:293)", "java.nio.ByteBuffer.position(java/nio/ByteBuffer.java:1094)", "org.logstash.common.io.RecordIOReader.consumeBlock(org/logstash/common/io/RecordIOReader.java:184)", "org.logstash.common.io.RecordIOReader.consumeToStartOfEvent(org/logstash/common/io/RecordIOReader.java:238)", "org.logstash.common.io.RecordIOReader.readEvent(org/logstash/common/io/RecordIOReader.java:282)", "org.logstash.common.io.RecordIOReader.seekToNextEventPosition(org/logstash/common/io/RecordIOReader.java:156)", "org.logstash.common.io.DeadLetterQueueReader.seekToNextEvent(org/logstash/common/io/DeadLetterQueueReader.java:87)", "org.logstash.input.DeadLetterQueueInputPlugin.setInitialReaderState(org/logstash/input/DeadLetterQueueInputPlugin.java:95)", "org.logstash.input.DeadLetterQueueInputPlugin.lazyInitQueueReader(org/logstash/input/DeadLetterQueueInputPlugin.java:74)", "org.logstash.input.DeadLetterQueueInputPlugin.register(org/logstash/input/DeadLetterQueueInputPlugin.java:81)", "jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)", "jdk.internal.reflect.NativeMethodAccessorImpl.invoke(jdk/internal/reflect/NativeMethodAccessorImpl.java:62)", "jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(jdk/internal/reflect/DelegatingMethodAccessorImpl.java:43)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:566)", "org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:441)", "org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:305)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_input_minus_dead_letter_queue_minus_1_dot_1_dot_12.lib.logstash.inputs.dead_letter_queue.register(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-dead_letter_queue-1.1.12/lib/logstash/inputs/dead_letter_queue.rb:57)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.register_plugins(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:233)", "org.jruby.RubyArray.each(org/jruby/RubyArray.java:1821)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.register_plugins(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:232)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start_inputs(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:391)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start_workers(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:316)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.run(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:190)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:142)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:318)", "java.lang.Thread.run(java/lang/Thread.java:829)"], "pipeline.sources"=>["/usr/share/logstash/pipelines/dead_letter_queue_valve/active/100-input-dlq.config", "/usr/share/logstash/pipelines/dead_letter_queue_valve/active/101-filter-drop-events-older-than-35d.config", 
"/usr/share/logstash/pipelines/dead_letter_queue_valve/active/102-test-case.config", "/usr/share/logstash/pipelines/dead_letter_queue_valve/active/200-filter-extract-fields.config", "/usr/share/logstash/pipelines/dead_letter_queue_valve/active/201-filter-add-dlq-event-hash-for-jdbc-common-out.config", "/usr/share/logstash/pipelines/dead_letter_queue_valve/active/300-output-flatfile.config", "/usr/share/logstash/pipelines/dead_letter_queue_valve/active/301-output-elastic-siem.config"], :thread=>"#<Thread:0x249baad4 run>"}