fluent / fluent-plugin-s3

Amazon S3 input and output plugin for Fluentd
https://docs.fluentd.org/output/s3

Access denied when adding `path_key` to fluentd config. #389

Closed anarwal closed 2 years ago

anarwal commented 2 years ago

Describe the bug

I recently added the `path_key` option to my fluentd config, and right after that I started getting the error below. My goal is to add the file path to each log record and then publish the logs to S3.
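
For reference, `path_key` on an `in_tail` source adds the tailed file's path to each emitted record under the given key; it does not change how records are delivered. With `path_key tailed_path`, a record looks roughly like this (message value illustrative):

```
# A line tailed from /var/log/syslog is emitted as:
{"message":"Jan 28 04:10:00 host sshd[123]: ...","tailed_path":"/var/log/syslog"}
```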

To Reproduce

Deploy fluentd agent to ubuntu or centos instance on AWS

Expected behavior

Logs should be published to S3

Your Environment

- Fluentd version: 1.14.3
- TD Agent version:
- fluent-plugin-s3 version:
- aws-sdk-s3 version:
- aws-sdk-sqs version:
- Operating system: Ubuntu
- Kernel version:

Your Configuration

<source>
  @type tail
  path /var/log/syslog
  pos_file /var/log/calyptia-fluentd/syslog.pos
  path_key tailed_path
  tag sec.syslog
  <parse>
    @type none
  </parse>
</source>
<source>
  @type tail
  path /var/log/auth.log
  pos_file /var/log/calyptia-fluentd/auth.log.pos
  path_key tailed_path
  tag sec.auth
  <parse>
    @type none
  </parse>
</source>
<source>
  @type tail
  path /var/log/dpkg.log
  pos_file /var/log/calyptia-fluentd/dpkg.log.pos
  path_key tailed_path
  tag sec.dpkg
  <parse>
    @type none
  </parse>
</source>
<filter sec.**>
  @type record_transformer
  <record>
    hostname      "#{ENV['LOCAL_HOSTNAME']}"
    instance_id   "#{ENV['INSTANCE_ID']}"
    instance_type "#{ENV['INSTANCE_TYPE']}"
    az            "#{ENV['AZ']}"    
    private_ip    "#{ENV['LOCAL_IPV4']}"
    ami_id        "#{ENV['AMI_ID']}"
    account_id    "#{ENV['ACCOUNT_ID']}"
    region    "#{ENV['REGION']}"
  </record>
</filter>
# matching all events with tag sec.*
<match sec.**>
  # copy the events to both primary and backup bucket
  # docs on copy: https://docs.fluentd.org/output/copy
  @type copy
  <store>
    @type s3
    <instance_profile_credentials>
      ip_address 
      port 
    </instance_profile_credentials>
    @id out_s3_primary
    @log_level info
    acl bucket-owner-full-control
    s3_bucket "#{ENV['S3_PRIMARY_BUCKET_NAME']}"
    s3_region "#{ENV['S3_PRIMARY_BUCKET_REGION']}"
    path "AWSLogs/#{ENV['ACCOUNT_ID']}/#{ENV['APP_NAME']}/#{ENV['REGION']}/#{ENV['INSTANCE_ID']}"
    s3_object_key_format %{path}/%Y/%m/%d/${tag}_%{time_slice}_%{index}.%{file_extension}
    <buffer tag,time>
      @type file
      timekey 60
      timekey_wait 60
      chunk_limit_size 5m
      path /var/log/calyptia-fluentd/buffer/primary
    </buffer>
    <format>
      @type json
    </format>
    <inject>
      time_key time
      time_type string
      time_format %Y-%m-%dT%H:%M:%SZ
    </inject>
  </store>
</match>
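
One thing worth double-checking in the configuration above: the `<instance_profile_credentials>` block lists `ip_address` and `port` with no values. If those empty settings override the plugin's defaults (the EC2 metadata endpoint `169.254.169.254:80`), the credential lookup can time out in exactly the way the error log shows. A hedged sketch, assuming instance-profile defaults are what's wanted:

```
<store>
  @type s3
  # Leave the block empty (or remove it entirely) so the plugin falls back
  # to the default EC2 metadata endpoint, 169.254.169.254:80.
  <instance_profile_credentials>
  </instance_profile_credentials>
  ...
</store>
```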

Your Error Log

`2022-01-28 04:10:29 +0000 [error]: #0 unexpected error error_class=RuntimeError error="can't call S3 API. Please check your credentials or s3_region configuration. error = #<Seahorse::Client::NetworkingError: execution expired>"`
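
The `execution expired` message is a Ruby networking timeout (`Seahorse::Client::NetworkingError`) raised while the AWS SDK tries to reach S3 or the instance-metadata endpoint; it is not a `path_key` parsing failure. A plugin-independent way to check basic TCP reachability from the instance is a connect with a short timeout. A minimal Ruby sketch (the hosts and ports below are examples, not values from this issue):

```ruby
require "socket"
require "timeout"

# Returns true if a TCP connection to host:port succeeds within timeout_sec.
def reachable?(host, port, timeout_sec = 2)
  Timeout.timeout(timeout_sec) { TCPSocket.new(host, port).close }
  true
rescue StandardError
  false
end

# e.g. reachable?("169.254.169.254", 80)  # EC2 instance-metadata endpoint
```

If the metadata endpoint is unreachable from the host, the credential fetch (and therefore every S3 API call) times out with exactly this kind of error.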

Additional context

No response

github-actions[bot] commented 2 years ago

This issue has been automatically marked as stale because it has been open 90 days with no activity. Remove the stale label or comment, or this issue will be closed in 30 days.

github-actions[bot] commented 2 years ago

This issue was automatically closed because it remained stale for 30 days.