chhuang0123 opened this issue 7 years ago
We encountered the same error message on logstash-output-sqs-4.0.0 with Elasticsearch 5 / 5.2.1 / 5.4.
Version: logstash-output-sqs-4.0.0
Error Message:
```
02:12:21.527 [LogStash::Runner] FATAL logstash.runner - An unexpected error occurred! {:error=>#<Aws::SQS::Errors::InvalidParameterValue: The request must contain the parameter MessageGroupId.>, :backtrace=>[
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.3.22/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.3.22/lib/aws-sdk-core/plugins/param_converter.rb:20:in `call'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.3.22/lib/seahorse/client/plugins/response_target.rb:21:in `call'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.3.22/lib/seahorse/client/request.rb:70:in `send_request'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.3.22/lib/seahorse/client/base.rb:207:in `send_message_batch'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-sqs-4.0.0/lib/logstash/outputs/sqs.rb:173:in `send_message_batch'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-sqs-4.0.0/lib/logstash/outputs/sqs.rb:153:in `multi_receive_encoded_batch'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-sqs-4.0.0/lib/logstash/outputs/sqs.rb:121:in `multi_receive_encoded'",
"/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:90:in `multi_receive'",
"/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:12:in `multi_receive'",
"/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in `multi_receive'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:336:in `output_batch'",
"org/jruby/RubyHash.java:1342:in `each'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:335:in `output_batch'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:293:in `worker_loop'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:263:in `start_workers'"]}
```
5.2.1
I have this problem as well. It looks like the aws-sdk needs to be updated to at least 2.6.24, according to this comment on an issue in the aws-sdk-ruby repo; that is the version that added support for the message group ID (which is required for FIFO queues). I also see that it may not be a completely straightforward fix, since the dependency seems to be defined in logstash-mixin-aws and so may affect quite a few plugins.
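For reference, the FIFO requirement boils down to one extra per-entry parameter: every SendMessageBatch entry sent to a `.fifo` queue must carry a `message_group_id`, and (unless the queue has content-based deduplication enabled) a `message_deduplication_id` as well. A minimal sketch of building such entries in plain Ruby; the helper name, the `"logstash"` group id, and the SHA-256 dedup id are my own illustration, not anything the plugin currently does:

```ruby
require "digest"

# Build SendMessageBatch-style entries for a list of encoded payloads.
# FIFO queues (names ending in ".fifo") require message_group_id on
# every entry; the deduplication id here is a content hash, which is
# only needed when the queue does not use content-based deduplication.
def build_entries(queue_name, payloads, group_id: "logstash")
  fifo = queue_name.end_with?(".fifo")
  payloads.each_with_index.map do |body, i|
    entry = { id: i.to_s, message_body: body }
    if fifo
      entry[:message_group_id] = group_id
      entry[:message_deduplication_id] = Digest::SHA256.hexdigest(body)
    end
    entry
  end
end

standard = build_entries("Test", ['{"a":1}'])      # no group id
fifo     = build_entries("Test.fifo", ['{"a":1}']) # group id + dedup id
```

Passing entries shaped like the `fifo` ones to `send_message_batch` is what aws-sdk 2.6.24+ makes possible; older SDK versions do not accept the parameter at all.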
I have the following library versions:
I still get this error. @toddprater, do you happen to know anything about this? My output config looks like this:
```
sqs {
  queue => "_sqs_queue_"
  region => "_aws_region_"
  codec => "json"
  batch_events => 10
  proxy_uri => "https://proxy:3128/"
}
```
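As an aside, `batch_events => 10` matches the SQS hard limit of 10 entries per SendMessageBatch request, so the plugin's batching amounts to slicing the event list into groups of at most that size before each API call. A rough sketch (the names are mine, not the plugin's):

```ruby
# SQS allows at most 10 entries per SendMessageBatch request,
# so events are shipped in slices of that size.
MAX_BATCH = 10

def batches(events)
  events.each_slice(MAX_BATCH).to_a
end

# 25 events -> 3 API calls (10 + 10 + 5 entries)
```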
```
[2019-02-18T13:11:21,904][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#call'",
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-core-2.11.202/lib/aws-sdk-core/plugins/jsonvalue_converter.rb:20:in `call'",
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-core-2.11.202/lib/aws-sdk-core/plugins/idempotency_token.rb:18:in `call'",
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-core-2.11.202/lib/aws-sdk-core/plugins/param_converter.rb:20:in `call'",
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-core-2.11.202/lib/seahorse/client/plugins/response_target.rb:21:in `call'",
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-core-2.11.202/lib/seahorse/client/request.rb:70:in `send_request'",
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-core-2.11.202/lib/seahorse/client/base.rb:207:in `block in define_operation_methods'",
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-sqs-5.1.2/lib/logstash/outputs/sqs.rb:172:in `send_message_batch'",
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-sqs-5.1.2/lib/logstash/outputs/sqs.rb:142:in `block in multi_receive_encoded_batch'",
"org/jruby/RubyArray.java:1734:in `each'",
"org/jruby/RubyEnumerable.java:1067:in `each_with_index'",
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-sqs-5.1.2/lib/logstash/outputs/sqs.rb:133:in `multi_receive_encoded_batch'",
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-sqs-5.1.2/lib/logstash/outputs/sqs.rb:120:in `multi_receive_encoded'",
"/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:87:in `multi_receive'",
"org/logstash/config/ir/compiler/OutputStrategyExt.java:114:in `multi_receive'",
"org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:97:in `multi_receive'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:373:in `block in output_batch'",
"org/jruby/RubyHash.java:1343:in `each'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:372:in `output_batch'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:324:in `worker_loop'",
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:287:in `block in start_workers'"]}
```
It seems FIFO SQS queues are not supported, am I right? I was able to set up a regular SQS queue and plug it into my Logstash setup (as output/input) to monitor my waiting queue. Could you let us know when FIFO SQS will be supported?
Due to a project requirement, we created a new FIFO SQS queue to guarantee no duplicates. So we added additional config to separate the two SQS queues (standard and FIFO). After restarting logstash-shipper, we can see log records being inserted into the standard SQS queue. However, there are no records in the FIFO queue, and we get a lot of error messages in our log files.
Version: logstash-output-sqs-2.0.2
Operating System: Ubuntu 14.04
Config File (if you have sensitive info, please remove it):
```
output {
  if [@sqs_type] == "fifo" {
    sqs {
      queue => "Test.fifo"
      region => "us-east-1"
    }
  } else {
    sqs {
      queue => "Test"
      region => "us-east-1"
    }
  }
}
```
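For completeness, the `[@sqs_type]` field used in that conditional has to be set by an earlier filter stage; a `mutate` filter along these lines would do it (the `[type] == "audit"` condition is purely illustrative):

```
filter {
  if [type] == "audit" {
    mutate { add_field => { "@sqs_type" => "fifo" } }
  }
}
```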
Error Message (backtrace truncated in the original paste):
```
{:timestamp=>"2017-06-21T08:50:26.684000+0000", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>"AWS::SQS::Errors::InvalidParameterValue", :backtrace=
```