Closed lexicalunit closed 9 years ago
Hello,
Thanks for the complete log. You don't need to supply configuration values for the access key and secret key, and in fact you shouldn't, as this isn't very secure. Instead, run your Elastic Beanstalk application with an EC2 Role that gives it the required permissions, such as the ability to read CloudWatch, Kinesis, DynamoDB, etc. Then you just need to configure the parameter 'config-file-url', using a format such as 's3://mybucket/my-config-file.json'. Lastly, ensure that the config file URL is readable by this EC2 Instance Role.
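For illustration, an `s3://` config-file-url splits into a bucket and an object key. A minimal sketch of that parsing (the bucket and key here are the placeholder values from above, not real resources):

```python
from urllib.parse import urlparse

def parse_s3_url(url):
    """Split an s3://bucket/key URL into a (bucket, key) tuple."""
    parsed = urlparse(url)
    if parsed.scheme != "s3":
        raise ValueError(f"expected an s3:// URL, got: {url}")
    return parsed.netloc, parsed.path.lstrip("/")

bucket, key = parse_s3_url("s3://mybucket/my-config-file.json")
# bucket == "mybucket", key == "my-config-file.json"
```

The instance role then needs `s3:GetObject` on that bucket/key for the application to fetch the file.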
Hopefully that will get you up and running!
Thx,
Ian
Oops, I incorrectly used underscores in the key name config-file-url. I'll fix that.
As for the required permissions: do you mean IAM Roles or EC2 Security Groups? I'm not sure what an "EC2 Role" is.
Yes, sorry for the confusion. I meant the IAM role that is applied to the EC2 instances in the Elastic Beanstalk fleet.
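For reference, the trust policy that lets EC2 instances assume such a role is the standard one below. This is the generic AWS pattern, not anything specific to this project:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The permissions themselves (S3, CloudWatch, Kinesis) are attached to the role separately as a permissions policy.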
Sweet! It's working :)
Initially I created a very open policy to avoid permissions issues. I am now removing Action items to restrict the policy to only what is necessary. Right now I have something like:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "...",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "..."
    },
    {
      "Sid": "...",
      "Action": [
        "cloudwatch:GetMetricStatistics"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "...",
      "Action": [
        "kinesis:GetShardIterator",
        "kinesis:MergeShards",
        "kinesis:SplitShard"
      ],
      "Effect": "Allow",
      "Resource": "..."
    }
  ]
}
```
I'll eventually discover this through experimentation, but I would much appreciate it if you could let me know whether there is anything missing from this policy. It might also be a good thing to add to the README or wiki.
Glad to hear it. I agree that we need to show a minimum policy for this module to run, and I'm going to leave this issue open until I've had time to put that together. In the meantime you can start with a policy that is read-only on CloudWatch and S3, and read-only plus merge/split shards on Kinesis.
:+1: Y'all rock! Having this functionality is going to be awesome for us.
Just to note, we had to add "kinesis:DescribeStream" to the above policy.
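Putting the thread's findings together, a minimal policy might look like the sketch below. The Resource ARNs are placeholders (a hypothetical bucket, account, region, and the `mytest` stream mentioned in this issue); adjust them to your own resources:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadScalingConfig",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::mybucket/my-config-file.json"
    },
    {
      "Sid": "ReadStreamMetrics",
      "Effect": "Allow",
      "Action": ["cloudwatch:GetMetricStatistics"],
      "Resource": "*"
    },
    {
      "Sid": "ScaleStream",
      "Effect": "Allow",
      "Action": [
        "kinesis:DescribeStream",
        "kinesis:GetShardIterator",
        "kinesis:MergeShards",
        "kinesis:SplitShard"
      ],
      "Resource": "arn:aws:kinesis:us-east-1:123456789012:stream/mytest"
    }
  ]
}
```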
I went ahead and wrote up some more detail on the permission requirements in a fork of the repo: https://github.com/scopely/amazon-kinesis-scaling-utils#iam-role
Unfortunately I don't have a more descriptive title for this ticket. I'll describe what I've done so far. I am using version 0.9.1.5.
Using Elastic Beanstalk, I installed the Kinesis Autoscaling WAR to a Web Server with Tomcat predefined configuration. Here is my autoscaling configuration:
I created the mytest stream with 6 shards. There is absolutely no data being pushed through the stream. I would expect the stream to be downscaled to 1 shard over the last weekend, but this morning I'm still seeing 6 shards. I am hosting the configuration on S3, and I've set the permissions on the file such that anyone can download it, so the application should have access to grab the config. Within the Configuration/Software Configuration settings in the EB application, I have provided values for:
AWS_ACCESS_KEY_ID
AWS_SECRET_KEY
config_file_url
The format of the config_file_url value is https://s3.amazonaws.com/my-bucket/kin-scale-config.json. The AWS keys are the keys for the account that created the Kinesis stream. I can get the logs from the EB instance. Everything looks fine; there are no errors obvious to me. Sensitive data has been omitted: