cherweg / logstash-input-s3-sns-sqs

Logstash input downloading files from an S3 bucket by ObjectKey from SNS/SQS

Doesn't release thread using Centralized pipelines #10

Closed danielkasen closed 5 years ago

danielkasen commented 6 years ago

Looks like if you don't enable pipeline.unsafe_shutdown for Logstash, the thread gets stuck in a loop and is never freed when you apply a change in centralized pipeline management in Kibana.

{"level":"ERROR","loggerName":"logstash.shutdownwatcher","timeMillis":1524022831132,"thread":"Ruby-0-Thread-58: /usr/share/logstash/logstash-core/lib/logstash/shutdown_watcher.rb:35","logEvent":{"message":"The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information."}}

{"level":"WARN","loggerName":"logstash.shutdownwatcher","timeMillis":1524022836130,"thread":"Ruby-0-Thread-58: /usr/share/logstash/logstash-core/lib/logstash/shutdown_watcher.rb:35","logEvent":{"message":"{\"inflight_count\"=>0, \"stalling_thread_info\"=>{[\"LogStash::Filters::Grok\", {\"match\"=>[\"message\", \"(%{WORD:protocol} )?%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:elb_name} %{IP:elb_client_ip}:%{NUMBER:elb_client_port} %{IP:elb_backend_ip}:%{NUMBER:elb_backend_port} %{NUMBER:request_processing_time} %{NUMBER:backend_processing_time} %{NUMBER:response_processing_time} (?:%{NUMBER:elb_status_code}|-) (?:%{NUMBER:backend_status_code}|-) %{NUMBER:elb_received_bytes} %{NUMBER:elb_sent_bytes} (?:%{QS:elb_request}|-) (?:%{QS:userAgent}|-) %{NOTSPACE:elb_sslcipher} %{NOTSPACE:elb_sslprotocol}\"], \"remove_field\"=>[\"message\"], \"id\"=>\"2cb8c7b9def8a92fa34d9222da84dab5fecad0c203f9c1d49f4911103c59941f\"}]=>[{\"thread_id\"=>82, \"name\"=>nil, \"current_call\"=>\"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:inread_batch'\"}]}}"}}`

{"level":"WARN","loggerName":"logstash.shutdownwatcher","timeMillis":1524023081129,"thread":"Ruby-0-Thread-58: /usr/share/logstash/logstash-core/lib/logstash/shutdown_watcher.rb:35","logEvent":{"message":"{\"inflight_count\"=>0, \"stalling_thread_info\"=>{[\"LogStash::Filters::Grok\", {\"match\"=>[\"message\", \"(%{WORD:protocol} )?%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:elb_name} %{IP:elb_client_ip}:%{NUMBER:elb_client_port} %{IP:elb_backend_ip}:%{NUMBER:elb_backend_port} %{NUMBER:request_processing_time} %{NUMBER:backend_processing_time} %{NUMBER:response_processing_time} (?:%{NUMBER:elb_status_code}|-) (?:%{NUMBER:backend_status_code}|-) %{NUMBER:elb_received_bytes} %{NUMBER:elb_sent_bytes} (?:%{QS:elb_request}|-) (?:%{QS:userAgent}|-) %{NOTSPACE:elb_sslcipher} %{NOTSPACE:elb_sslprotocol}\"], \"remove_field\"=>[\"message\"], \"id\"=>\"2cb8c7b9def8a92fa34d9222da84dab5fecad0c203f9c1d49f4911103c59941f\"}]=>[{\"thread_id\"=>82, \"name\"=>nil, \"current_call\"=>\"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:90:inread_batch'\"}]}}"}}`

christianherweg0807 commented 5 years ago

Hey Daniel, with the latest 2.0 (will be released soon) this should be solved. Sometimes threads don't have enough time for a clean shutdown, but that is down to the fixed timeout of Logstash's shutdownwatcher.

Plz test & comment

regards christian

danielkasen commented 5 years ago

Thanks. Hard for me to test now but feel free to close this :)

christianherweg0807 commented 5 years ago

Tested in our Docker environment without pipeline.unsafe_shutdown. No Logstash instance gets stuck. If you have large files in progress, the shutdown timeout is not enough; the plugin will requeue incomplete files.
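(Editor's note: for readers wondering how "requeue incomplete files" works conceptually, below is a minimal Ruby sketch of the usual SQS delete-after-processing pattern. It is not the plugin's actual code; the queue URL and the process_s3_object helper are hypothetical. A message is only deleted once its S3 object has been fully processed, so an interrupted file becomes visible again after the SQS visibility timeout and is picked up on the next run.)

```ruby
# Sketch of the delete-after-processing pattern behind "requeue incomplete
# files". Not the plugin's actual implementation.
require 'aws-sdk-sqs'

# Hypothetical stand-in for the real work: download the S3 object named in the
# SNS/SQS message and turn it into events.
def process_s3_object(message_body)
  # ... download from S3 and emit events ...
end

sqs = Aws::SQS::Client.new(region: 'us-east-1')                     # example region
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/logs' # hypothetical queue

resp = sqs.receive_message(queue_url: queue_url,
                           max_number_of_messages: 1,
                           wait_time_seconds: 10)

resp.messages.each do |msg|
  begin
    process_s3_object(msg.body)
    # Acknowledge the message only after processing completed successfully.
    sqs.delete_message(queue_url: queue_url, receipt_handle: msg.receipt_handle)
  rescue => e
    # On failure (or an interrupted shutdown) the message is not deleted, so SQS
    # makes it visible again after the visibility timeout and the file is requeued.
    warn "processing failed, message will be redelivered: #{e.message}"
  end
end
```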