Constanca, this really looks great! And I think it is a valuable addition for the users.
Some more ideas:
And one more question: let's say I want to configure s3-sqs, then the user needs to go and set

```hcl
inputs = [
  {
    type = "s3-sqs"
    id   = "arn:aws:sqs:us-east-1:627286350134:sqs*"
    outputs = [
    ...
```

Plus they need to include

```hcl
s3-buckets = ["arn:aws:s3:::gizas-se-test2"]
```

I think this needs to be included in the documentation example. This is not clear right now. Maybe we need to add a section describing what variables the user needs to configure per scenario.
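For illustration, a complete s3-sqs scenario could look roughly like the sketch below. It simply combines the two snippets above; the output values and the data stream name are placeholders.

```hcl
# Illustrative s3-sqs scenario; the Elasticsearch URL, API key and data stream name are placeholders.
inputs = [
  {
    type = "s3-sqs"
    id   = "arn:aws:sqs:us-east-1:627286350134:sqs*"
    outputs = [
      {
        type = "elasticsearch"
        args = {
          elasticsearch_url  = "https://my-deployment.es.io:443"
          api_key            = "<your api key>"
          es_datastream_name = "logs-esf.s3sqs-default"
        }
      }
    ]
  }
]

# The bucket(s) behind the SQS notifications also need to be listed.
s3-buckets = ["arn:aws:s3:::gizas-se-test2"]
```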
[Bonus Issue] Here is the issue I see in my local tests:
```
│ Error: creating Lambda Function (gizas-lamda): operation error Lambda: CreateFunction, https response error StatusCode: 400, RequestID: 09394998-b43d-4260-bf6d-eeaf6483a204, InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: PermanentRedirect. S3 Error Message: The bucket is in this region: eu-central-1. Please use this region to retry the request
│
│ with module.esf-lambda-function.aws_lambda_function.this[0],
│ on .terraform/modules/esf-lambda-function/main.tf line 24, in resource "aws_lambda_function" "this":
│ 24: resource "aws_lambda_function" "this" {
```
With `vars.auto`:

```hcl
aws_region = "us-east-1"
# config-file-bucket = "arn:aws:s3:::gizas2-bucket-esf-bucket"
lambda-name     = "gizas-lamda"
release-version = "lambda-v1.9.0"
# config-file-local-path = "./config.yaml"
inputs = [
  {
    type = "cloudwatch-logs"
    id   = "arn:aws:logs:us-east-1:627286350134:log-group:gizas-bucket-cloudwatch-lg:*"
    outputs = [
      {
        type = "elasticsearch"
        args = {
          elasticsearch_url  = "https://f70af2e2076746faa05413b33a371298.us-central1.gcp.cloud.es.io"
          api_key            = "mVTAxTVNFdE1RUT09AAAAAB6pFQk="
          es_datastream_name = "logs-esf.cloudwatch-default"
        }
      }
    ]
  }
]
```
> We should configure a terraform output section where we should print queues and especially the s3 config-bucket

Which queues? @gizas

I can add the output for the S3 bucket name; it is a very simple change and it is also useful for the benchmarking project.

> In the documentation we should include

I am a bit hesitant to do this, because if the types change we need to also update the documentation. I will link the official documentation instead, I think that is easier.

> I think this needs to be included in the documentation example.

:+1:
> Which queues?

`lamda-continuing-queue-dlq` and `lamda-replay-queue-dlq`. Maybe also the lambda function? I think if the user knows the S3 config bucket, the queues, and the lambda, they can go and find the rest.

> because if the types change we need to also update the documentation

Indeed. Maybe we could have some suggestions? At least to help users with how to write the input.
> Maybe also the lambda function? I think if the user knows the S3 config bucket, the queues, and the lambda, they can go and find the rest.

Yes, I will add an `outputs.tf` then.
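A sketch of what that `outputs.tf` could expose, assuming resource names along the lines discussed above (the actual resource addresses in the repository may differ):

```hcl
# Hypothetical outputs.tf; resource and module addresses are illustrative.
output "config-file-bucket" {
  description = "Name of the S3 bucket holding the ESF config.yaml"
  value       = aws_s3_bucket.esf-config-bucket.bucket
}

output "continuing-queue-dlq" {
  description = "ARN of the continuing queue DLQ"
  value       = aws_sqs_queue.lamda-continuing-queue-dlq.arn
}

output "replay-queue-dlq" {
  description = "ARN of the replay queue DLQ"
  value       = aws_sqs_queue.lamda-replay-queue-dlq.arn
}

output "lambda-name" {
  description = "Name of the ESF lambda function"
  value       = module.esf-lambda-function.lambda_function_name
}
```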
> Indeed. Maybe we could have some suggestions? At least to help users with how to write the input.

I will add the possible ones. They are also hardcoded in the custom validation rule of the variable.
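For reference, a custom validation rule of that kind might look roughly like this (a sketch; the exact rule, variable type, and allowed input types live in the repository's `variables.tf`):

```hcl
# Sketch of a validation rule restricting input types to the ones ESF documents.
variable "inputs" {
  type    = list(any)
  default = []

  validation {
    condition = alltrue([
      for input in var.inputs :
      contains(["cloudwatch-logs", "sqs", "s3-sqs", "kinesis-data-stream"], input.type)
    ])
    error_message = "Each input type must be one of: cloudwatch-logs, sqs, s3-sqs, kinesis-data-stream."
  }
}
```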
Thanks @gizas for bringing this issue up:
> Here is the issue I see in my local tests:
```
│ Error: creating Lambda Function (gizas-lamda): operation error Lambda: CreateFunction, https response error StatusCode: 400, RequestID: 09394998-b43d-4260-bf6d-eeaf6483a204, InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: PermanentRedirect. S3 Error Message: The bucket is in this region: eu-central-1. Please use this region to retry the request
│
│ with module.esf-lambda-function.aws_lambda_function.this[0],
│ on .terraform/modules/esf-lambda-function/main.tf line 24, in resource "aws_lambda_function" "this":
│ 24: resource "aws_lambda_function" "this" {
```
So far I have not been able to solve this...
Giving a bit of context on what is happening: we have an S3 bucket in our Elastic production account. This bucket needs to be accessible to other accounts as well, and it is located in the eu-central-1 region. Since this bucket needs to be accessible to other accounts, we need to create an Access Point. I also studied the option of creating a new role to grant permission to read the bucket, but this is not viable since we do not know the users in advance. With an access point, we can give the necessary read permissions without filtering by users.
However, when we link the dependencies bucket to our lambda function, we specify it in `s3_existing_package`. This assumes that the zip file is in an S3 bucket in the same region as our lambda (and all our other resources). If it is not in the same region, it fails.
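For context, this is roughly how a pre-built zip is wired into the lambda module through `s3_existing_package` (a sketch following the terraform-aws-modules/lambda/aws input format; the bucket and key here are illustrative):

```hcl
module "esf-lambda-function" {
  source = "terraform-aws-modules/lambda/aws"

  function_name  = var.lambda-name
  create_package = false

  # The bucket referenced here must be in the same region as the lambda,
  # otherwise CreateFunction fails with the PermanentRedirect error above.
  s3_existing_package = {
    bucket = "esf-dependencies"           # illustrative production bucket (eu-central-1)
    key    = "${var.release-version}.zip" # illustrative object key
  }
}
```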
So now we need to find a workaround.
I have thought of downloading the zip file from the production bucket in eu-central-1 and placing it in the `builds` local directory. I am unsure if this means that we need to have Python locally, which would add a new dependency... Following https://github.com/elastic/terraform-elastic-esf/pull/7#issuecomment-2088677523, I decided to abandon that approach. Now we do the following:

- We download the zip file from the S3 production bucket.
- We upload that zip file to the user's S3 bucket.

This way we no longer need to worry about all the resources being in the eu-central-1 region.
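A minimal sketch of that flow, assuming a `null_resource` download step and an `aws_s3_object` upload (the actual resources in this PR may differ, and the download URL and bucket reference are placeholders):

```hcl
resource "null_resource" "download-esf-zip" {
  # Download the dependencies zip from the production bucket (placeholder URL).
  provisioner "local-exec" {
    command = "curl -L -o ${var.release-version}.zip https://esf-dependencies.s3.eu-central-1.amazonaws.com/${var.release-version}.zip"
  }
  triggers = {
    release = var.release-version
  }
}

resource "aws_s3_object" "esf-zip" {
  # Re-upload the zip into a bucket that lives in the same region as the lambda.
  bucket     = aws_s3_bucket.esf-config-bucket.id # hypothetical bucket resource
  key        = "${var.release-version}.zip"
  source     = "${var.release-version}.zip"
  depends_on = [null_resource.download-esf-zip]
}
```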
I tried ESF in the eu-west-2 region. Proof that it now works:

(These are my ESF data stream logs.)
Thanks again @gizas for pointing out the problem.
I also added the `outputs.tf` file in the latest commit.
> We download the zip file from the S3 production bucket.

I think it is better this way. We might have problems later on when we support more architectures, but for now it is ok.

> We upload that zip file to the user's S3 bucket - this is important to avoid deploying it locally.

This is also working. I tested it and it works fine! I am only wondering if you could provide the alternative option to upload from a local folder, since you already have the steps in a previous PR. But it is not important for now.
Thank you for the output!
> I am only wondering if you could provide the alternative option to upload from a local folder, since you already have the steps in a previous PR. But it is not important for now.

I could do that, but do you think that customers would use it? I think it is only needed for us, @gizas.
## What does this PR do?

Please see the issue for the context.

With this change, we now need to upload a `config.yaml` file to an S3 bucket (it can be provided, otherwise we create it). The content of the file will be built using the value of the `inputs` variable or a local configuration file the user has. See the next section for more details on this.

## Details

Please see the issue for the context. You can also find (most of) this section in the README.md file.
This is what is happening now:
### Building the `config.yaml` file

When applying these configuration files, a `config.yaml` file will always be uploaded to an S3 bucket. This S3 bucket will be the one specified in `config-file-bucket`, or, if that value is left empty, a new S3 bucket will be created.

Following this, we will create the content for the `config.yaml` file. This file will be built based on:

- `inputs`. This variable is not required.
- `config-file-local-path`. This variable is also not required.

If both variables are provided, both will be considered. Otherwise, just the one that was given. If none are provided, the resulting `config.yaml` file will have no inputs, so it does not make sense to leave both empty.
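As a rough illustration of that mapping (a sketch based on the `inputs` example earlier in this thread; the ARN, URL, API key, and data stream name are placeholders), an `inputs` value like:

```hcl
inputs = [
  {
    type = "cloudwatch-logs"
    id   = "arn:aws:logs:us-east-1:123456789012:log-group:my-log-group:*"
    outputs = [
      {
        type = "elasticsearch"
        args = {
          elasticsearch_url  = "https://my-deployment.es.io:443"
          api_key            = "<your api key>"
          es_datastream_name = "logs-esf.cloudwatch-default"
        }
      }
    ]
  }
]
```

would be rendered into a `config.yaml` along these lines:

```yaml
inputs:
  - type: "cloudwatch-logs"
    id: "arn:aws:logs:us-east-1:123456789012:log-group:my-log-group:*"
    outputs:
      - type: "elasticsearch"
        args:
          elasticsearch_url: "https://my-deployment.es.io:443"
          api_key: "<your api key>"
          es_datastream_name: "logs-esf.cloudwatch-default"
```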
You can see the following examples of the resulting `config.yaml` file.

#### Configure just the `inputs` variable

Configure the `inputs` variable, for example as in the sketch above. Do not configure the `config-file-bucket` variable, which will be left empty since that is the default.

The `config.yaml` placed inside the bucket will contain exactly the inputs defined in that variable.

#### Configure just the `config-file-local-path` variable

Do not configure the `inputs` variable, which will be left as `[]` since that is the default. Configure the `config-file-local-path` variable to point to a local `config.yaml` file.

The `config.yaml` placed inside the bucket will be the content of that local file.

#### Configure both variables

Configure both `inputs` and `config-file-local-path` like in the previous examples.

The `config.yaml` placed inside the bucket will combine the inputs from both.

### Example result

For this example, I am just using the `inputs` variable in my `*.auto.tfvars`. Since I don't specify the S3 bucket through the `config-file-bucket` variable, that bucket will also be created. The content of the `config.yaml` file uploaded there will be just the one specified in the `inputs` variable.

I have a CloudWatch logs group receiving logs (the one I specified). Discover looks like this: