logstash-plugins / logstash-output-s3

Bucket does not exist #164

Closed: stuffandthings closed this issue 4 years ago

stuffandthings commented 6 years ago

Hello,

I'm trying to output some events to an S3 bucket. My Logstash node is running on EC2 and has an instance profile associated with it giving it S3 permissions.

I have confirmed that the permissions work using the awscli:

aws s3 ls [bucket-name]
aws s3 cp test.txt s3://[bucket-name]
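
Two further AWS CLI checks can help narrow this down (suggested here, not part of the original report): confirm the bucket's actual region matches the region configured in the s3 output, and confirm which IAM identity the instance profile actually resolves to:

aws s3api get-bucket-location --bucket [bucket-name]
aws sts get-caller-identity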

In my Logstash logs I see the following message:

[2017-11-07T17:45:14,582][ERROR][logstash.outputs.s3      ] Error validating bucket write permissions! {:message=>"The specified bucket does not exist", :class=>"Aws::S3::Errors::NoSuchBucket", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.3.22/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.3.22/lib/aws-sdk-core/plugins/s3_sse_cpk.rb:19:in `call'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.3.22/lib/aws-sdk-core/plugins/s3_accelerate.rb:33:in `call'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.3.22/lib/aws-sdk-core/plugins/param_converter.rb:20:in `call'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.3.22/lib/seahorse/client/plugins/response_target.rb:21:in `call'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.3.22/lib/seahorse/client/request.rb:70:in `send_request'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.3.22/lib/seahorse/client/base.rb:207:in `put_object'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-resources-2.3.22/lib/aws-sdk-resources/services/s3/file_uploader.rb:42:in `put_object'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-resources-2.3.22/lib/aws-sdk-resources/services/s3/file_uploader.rb:52:in `open_file'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-resources-2.3.22/lib/aws-sdk-resources/services/s3/file_uploader.rb:41:in `put_object'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-resources-2.3.22/lib/aws-sdk-resources/services/s3/file_uploader.rb:34:in `upload'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-resources-2.3.22/lib/aws-sdk-resources/services/s3/object.rb:251:in `upload_file'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-s3-4.0.11/lib/logstash/outputs/s3/write_bucket_permission_validator.rb:43:in `upload_test_file'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-s3-4.0.11/lib/logstash/outputs/s3/write_bucket_permission_validator.rb:18:in `valid?'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-s3-4.0.11/lib/logstash/outputs/s3.rb:200:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:290:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:310:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:235:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:398:in `start_pipeline'"]}

As a sanity check, I have also confirmed that the [bucket-name] in the logstash.config file matches the bucket I tested with the awscli.

Here's my logstash.config:

input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => ["(?i)gooby"] }
    add_tag => ["dolan"]
  }
}

output {
  elasticsearch {
    hosts => [ "<removed>" ]
    index => "logstash-%{+YYYY.MM.dd}"
  }
  if "dolan" in [tags] {
    s3 {
      region => "us-east-1"
      bucket => "[bucket-name]"
    }
  }
}

stuffandthings commented 6 years ago

Update: I was able to work around the issue by adding validate_credentials_on_root_bucket => false to the s3 output:

    s3 {
      region => "us-east-1"
      bucket => "[bucket-name]"
      validate_credentials_on_root_bucket => false
    }
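
For context, the backtrace above ends in write_bucket_permission_validator.rb (upload_test_file), so the failure happens during the plugin's startup permission check, before any events are uploaded. Per the plugin docs, validate_credentials_on_root_bucket (default true) makes that check write a test file to the root of the bucket; setting it to false disables the startup check, which is intended for setups where write access is only granted under a specific prefix. A minimal sketch combining the workaround with a restricted prefix (the "logs/" prefix is hypothetical, not taken from this issue):

    s3 {
      region => "us-east-1"
      bucket => "[bucket-name]"
      prefix => "logs/"                              # hypothetical prefix the IAM policy allows writes under
      validate_credentials_on_root_bucket => false   # skip the startup write test at the bucket root
    }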