Open tcalvillo opened 8 months ago
The documentation does state what pairs of settings it uses, and in which order:

1. Static configuration, using `access_key_id` and `secret_access_key` params in the Logstash plugin config
2. External credentials file specified by `aws_credentials_file`
3. Environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
4. Environment variables `AMAZON_ACCESS_KEY_ID` and `AMAZON_SECRET_ACCESS_KEY`
5. IAM Instance Profile (available when running inside EC2)
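For reference, the first (static) option looks roughly like this in a pipeline config. This is only a sketch; the bucket name and key values are placeholders:

```
output {
  s3 {
    region            => "eu-west-1"
    bucket            => "my-bucket"
    access_key_id     => "AKIAEXAMPLE"
    secret_access_key => "wJalrEXAMPLEKEY"
  }
}
```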
The `required` marker for a setting is only `Yes` when that setting is truly mandatory. In this case those settings are not mandatory, but some form of authentication (I think) is. Not sure how we represent that for other plugins, but maybe @karenzone can chime in.
Currently, in order to access S3 buckets the plugin requires any of [<`access_key_id`, `secret_access_key` pair>, `aws_credentials_file`, `role_arn`]. That's why they are optional.

Is your intention to access a public bucket without creds? If so, I don't see a way to do that for now.
Hello Mashhurs,
Thank you for your reply and for the clarification. Yes, my intention was to access my own bucket without creds because in AWS I do use a role that provides me the proper permissions and, therefore, I cannot provide access_key_id and secret_access_key pair. You may tell me that I could create in AWS a user called logstash and provide to this user the access_key_id and secret_access_key pair and I will be fine. However, I do not have enough permissions to create users (and my company wouldn't want it for security reasons) so I feel that I'm stuck because I don't have the ability to send logs to S3.
I called AWS and they told me that it is not possible to assign an access_key_id and secret_access_key to a role. I believed that attaching a policy with S3 permissions to the EC2 instance would be good enough, but that doesn't seem to be the case.
I'm a newbie with Logstash, so thank you for the patience. Any help/suggestions will be appreciated.
Regards, Tizi
If you are running Logstash in EC2 with an IAM role attached and you have the role ARN, you can set it in the `role_arn` option of the s3 output plugin. Logstash will be able to access S3 through the EC2 metadata service (http://169.254.169.254/latest/meta-data/). Make sure there is no proxy on the metadata service IP (`export NO_PROXY=169.254.169.254`).
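To confirm that the instance role is actually visible from the host, you can query the metadata service directly. A minimal sketch: it uses the plain IMDSv1 credentials path for brevity, so if your instance enforces IMDSv2 you would need to fetch a session token first; outside EC2 it simply reports unreachable.

```shell
#!/bin/sh
# Hedged sketch: check that the EC2 metadata service is reachable and not
# intercepted by a proxy. Uses the IMDSv1 path; IMDSv2-only instances need
# a session token first.
export NO_PROXY=169.254.169.254   # bypass any corporate proxy for IMDS

IMDS="http://169.254.169.254/latest/meta-data/iam/security-credentials/"
if role=$(curl -s --fail --max-time 2 "$IMDS"); then
  echo "instance role visible to this host: $role"
else
  echo "metadata service not reachable (not on EC2, or IMDSv2-only)"
fi
```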
Hello Mashhurs,
I just tried using my role ARN but it fails:
```
[main] Uploading failed, retrying (#168 of Infinity) {:exception=>ArgumentError, :message=>":key must not be blank", :path=>"/etc/logstash/tmp/logstash/ciao.txt",
```
I will post below my first-pipeline.conf:

```
input {
  beats {
    port => 5044
  }
}

output {
  s3 {
    region => "eu-west-1"
    bucket => "mybucket"
    role_arn => "arn:aws:iam::111111111111:role/role-secops-infrastructureadministrator"
    # secret_access_key => ""
  }
}
```
I checked, and both my firewall and proxy are turned off. Please let me know if I did anything incorrectly. Thanks.
Regards, Tizi
Hello @mashhurs, may I ask for your help with my last comment? Thank you in advance.
Regards, Tizi
You are facing a strange error where the file doesn't contain a key, coming through this line, if you are using a final version of the plugin.
Did you make any config changes besides the S3 creds? Can you clean your temporary folders and re-run?
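One safe way to clean the temporary folder is to move the leftover part-files aside before restarting Logstash. A minimal sketch, assuming a default temp path; point `TMP_DIR` at whatever your `-Djava.io.tmpdir` / `temporary_directory` setting actually is (e.g. /etc/logstash/tmp/logstash in this thread):

```shell
#!/bin/sh
# Hedged sketch: back up, then empty, the temporary directory the s3 output
# plugin restores part-files from. The default path below is an assumption.
TMP_DIR="${TMP_DIR:-/tmp/logstash-s3-tmp}"
BACKUP_DIR="${BACKUP_DIR:-${TMP_DIR}-backup-$(date +%s)}"

mkdir -p "$TMP_DIR" "$BACKUP_DIR"

# Keep a copy in case the leftover part-files still need to be uploaded later.
cp -a "$TMP_DIR/." "$BACKUP_DIR/"

# Remove the leftovers so the plugin starts from a clean state on restart.
find "$TMP_DIR" -mindepth 1 -delete

echo "cleaned $TMP_DIR (backup in $BACKUP_DIR)"
```

After cleaning, re-run the pipeline; if the ":key must not be blank" error disappears, the leftover files were the cause.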
Hello @mashhurs, thanks for your reply. One change that I made was in /etc/logstash/: in jvm.options I changed the parameter from -Djava.io.tmpdir=$HOME (I commented it out) to -Djava.io.tmpdir=/etc/logstash/tmp/. I then recursively granted ownership logstash:logstash over the whole dir /etc/logstash/tmp/. I tried cleaning the temporary folders and re-running, but got the same error. I even tried deleting the file, in case it was corrupted, and creating a new one, but got the same error.
Regards, Tizi
Hello @mashhurs, may I ask you to please edit your reply from 5 days ago and remove the bucket name and role? Thank you very much.
As I understand from your last error message, there are some leftover files in the temporary folder that the plugin is trying to restore. If you can clean the temporary dir (back up the data in case you need it in the future) and rerun, we could see if that was the cause. I don't see a `temporary_directory` setting in your s3 output pipeline config; if you have one, please clean that dir.
Hello @mashhurs, thank you for your reply. Today I added a temporary directory in my first-pipeline.conf (located in /usr/share/logstash/). In /usr/share/logstash/temporary_directory I removed my test file (it is now completely empty) and I ran the pipeline with this config:

```
input {
  beats {
    ports => 5044
  }
}

output {
  s3 {
    region => "eu-west-1",
    bucket => "my_bucket_name",
    rotation_strategy => "time",
    time_file => 1,
    temporary_directory => "/usr/share/logstash/temporary_directory,
    role_arn => "my_arn_here"
  }
}
```

I got this error:
```
Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"{\", \"}\" at line 8, column 26 (byte 84) after output {\n s3 {\n region => \"eu-west-1\"", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:239:in `initialize'", "org/logstash/execution/AbstractPipelineExt.java:173:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48:in `initialize'", "org/jruby/RubyClass.java:931:in `new'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:49:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:386:in `block in converge_state'"]}
```
NOTE: I did realize that I had added the commas; it looked like I needed them in my first-pipeline.conf. However, I still got errors.
I tried as well removing the commas from first-pipeline.conf and running the pipeline again, but got this other error:
```
Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"{\", \"}\" at line 13, column 18 (byte 274) after output {\n s3 {\n region => \"eu-west-1\"\n bucket => \"my_bucket_name\"\n rotation_strategy => \"time\"\n time_file => 1\n temporary_directory => \"/usr/share/logstash/temporary_directory\n role_arn => \"", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:239:in `initialize'", "org/logstash/execution/AbstractPipelineExt.java:173:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48:in `initialize'", "org/jruby/RubyClass.java:931:in `new'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:49:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:386:in `block in converge_state'"]}
```
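For reference, here is how the pipeline above parses cleanly once the syntax issues are fixed: Logstash config takes no commas between settings, the `temporary_directory` value was missing its closing quote (likely what the second error, at line 13, points at), and the beats input option is `port`, not `ports`. Values are still the placeholders from above:

```
input {
  beats {
    port => 5044
  }
}

output {
  s3 {
    region              => "eu-west-1"
    bucket              => "my_bucket_name"
    rotation_strategy   => "time"
    time_file           => 1
    temporary_directory => "/usr/share/logstash/temporary_directory"
    role_arn            => "my_arn_here"
  }
}
```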
Logstash information:

- Logstash version (`bin/logstash --version`): 8.12.0
- JVM (`java -version`): 11.0.22
- OS version (`uname -a`): Linux ip-10-147-116-224.xxx 3.10.0-1160.108.1.el7.x86_64 #1 SMP Thu Jan 25 16:17:31 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Description of the problem, including expected versus actual behavior:

In the documentation (https://www.elastic.co/guide/en/logstash/current/plugins-outputs-s3.html) it is clearly written (see the example in "Usage" and "S3 Output Configuration Options") that both access_key_id and secret_access_key are optional. However, if you do NOT include this information, you get errors. Specifically, if you provide neither of the two, you get the error "key must not be blank". If you provide access_key_id but NOT secret_access_key, you get the error "unable to sign request without credentials set". The documentation is misleading because it makes you believe that only the name of the bucket is required.
Steps to reproduce:

```
cd /usr/share/logstash/bin/
/usr/share/logstash/bin/logstash -f /usr/share/logstash/first-pipeline.conf
```
Below my "first-pipeline.conf" file:
Thanks for looking into it :)
Regards, Tizi
Added by @mashhurs

Expectation

The user expectation in this issue is persisting data on S3 without credentials, i.e. the equivalent of unsigned (`--no-sign-request`) AWS API requests.
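For context, unsigned requests are what the AWS CLI exposes via its `--no-sign-request` flag; per the discussion above, the s3 output plugin currently has no equivalent option. Illustrative only, with a placeholder bucket name:

```shell
# Anonymous (unsigned) request against a public bucket; no credentials involved.
aws s3 ls s3://my-public-bucket --no-sign-request
```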