Open mchassy opened 4 years ago
This is a MUST-HAVE feature, and the lack of this support is the primary reason I have had multiple teams choose not to use Chalice.
@duaneking Can you elaborate on what specifically is a must-have feature? Being able to access RDS, or the partial deploys not being cleaned up properly (https://github.com/aws/chalice/issues/1340)?
@mchassy It looks like your app got into a bad state (tracking work in #1340). Once you were able to configure your own policy files, was it working for you? What issues were you running into when you tried that?
@jamesls The must-have feature is that Chalice needs to support RDS instances as part of a deployment and maintenance lifecycle for an app created with it, without harming the instance or its data.
I don't want the RDS instance blown away every time I deploy as part of my CTDD-driven CI/CD process; I just want the lambda code to update. Optionally, let me do both, but never make that the default, because losing data is not good.
I don't want the RDS instance statically defined as a singleton, such that I can't use a different one for test, dev, or prod environment deployments driven by my own CI/CD. And I don't want to have to worry about the prod RDS instance getting referenced when I'm redeploying the test environment because Chalice deploys the wrong thing in its confusion.
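One way to avoid a hard-coded singleton instance today is to resolve the database endpoint per stage from environment variables (Chalice can inject per-stage environment variables from .chalice/config.json). A minimal sketch; the variable name DB_HOST is an assumption:

```python
import os

def db_endpoint() -> str:
    """Resolve the RDS endpoint for the current deployment stage.

    DB_HOST is expected to be injected per stage (test/dev/prod), e.g. via
    Chalice's per-stage environment_variables, so a test deployment can
    never silently pick up the prod instance.
    """
    host = os.environ.get("DB_HOST")
    if host is None:
        # Fail fast rather than fall back to a default that might be prod.
        raise RuntimeError("DB_HOST is not set for this stage")
    return host
```

Each stage then carries its own endpoint, and a misconfigured stage fails loudly at startup instead of connecting to the wrong database.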
I see chalice as having so much potential, but the lack of AWS service lifecycle support for the services people want to use the most absolutely kills its ability to be useful for anything not trivial.
An RDS instance is a critical core aspect of the patterns people want to use with Lambda. It should be fully supported, and we should have the ability to say: "as part of your deployment, assure this RDS instance exists; if not, bring it online, but otherwise don't remove it or redeploy it, because we care about the data that already exists there. And make sure the lambda we are now redeploying is networked so it can hit the instance directly without issues, so our code just works because it pulls the host configuration from the right place."
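That "create only if missing, never delete" behaviour can be captured as a pure planning step that a deploy tool would run before touching anything. This is a sketch, not Chalice's API; in a real hook the existing IDs would come from boto3's rds describe_db_instances call:

```python
def plan_rds_step(existing_instance_ids, wanted_instance_id):
    """Decide what a deploy should do about the app's RDS instance.

    existing_instance_ids would come from e.g.
    boto3.client("rds").describe_db_instances() in a real deploy hook.
    Deletion is never an option here: data-bearing resources are only
    ever created or left alone.
    """
    if wanted_instance_id in existing_instance_ids:
        return "leave-alone"   # instance and its data are preserved
    return "create"            # bring it online, then continue the deploy
```

Keeping the decision pure makes it trivial to test without an AWS account, while the boto3 wiring stays in a thin shell around it.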
Thanks for the feedback @duaneking, this is really helpful.
Happy to help.
My primary role is generally to help architect things as well as code them up, so I see a lot of potential here. But this project is now old enough that the lack of basic RDS support counts against it, and that is the primary reason it is not chosen over Serverless or Zappa in the roles I spoke of.
When you use Serverless you have to use Node's npm, and that brings in all of its problems and security issues. I would much rather have a pip install of Chalice that just works and uses what I need than have to use Node.js tooling that doesn't even get deployed.
But on the other hand, Zappa has its issues too: yes, it installs via pip, but it is not perfect either.
This is a very basic shell script which allows me to keep my lambdas in a common Python project and upload only what is needed to AWS. It also handles TEST and PROD (though in a very crude way). This is what I would expect any plugin to do, at the least; as it is, we maintain it ourselves. Chalice should, at the least, be able to pick up the modules that are actually used. It also seems to do something that prevents code from being edited online (not something I usually want to do, but it is sometimes necessary for quick checks).
#!/bin/bash
MODE=$1
PRJDIR=$PWD

rm -Rf "$PRJDIR/dist"
mkdir -p "$PRJDIR/dist"

declare -a LAMBDAS=("upload_notification" "ingest_meta" "ingest_survey" "get_embed_url" "ingest_machine")

echo "===================================================================================================="
echo "Updating in $MODE"
echo "===================================================================================================="

for LAMBDA in "${LAMBDAS[@]}"
do
    cd "$PRJDIR" || exit 1
    echo "processing: $LAMBDA"
    echo "===================================================================================================="

    PRODROLE="arn:aws:iam::439359573308:role/top200LamdaRole"
    TESTROLE="arn:aws:iam::439359573308:role/TEST-top200LambdaRole"
    RUNTIME="python3.8"
    HANDLER="handler.$LAMBDA"
    EMODS="$PRJDIR/venv/lib/python3.8/site-packages"
    IMODS="$PRJDIR/src"

    declare -a FULL_INTERNAL_MODULES=("${IMODS}/handler.py" "${IMODS}/slacker.py" "${IMODS}/aws.py" "${IMODS}/access.py" \
        "${IMODS}/common.py" "${IMODS}/sheets.py")
    declare -a MIN_INTERNAL_MODULES=("${IMODS}/handler.py" "${IMODS}/slacker.py" "${IMODS}/aws.py")

    # Pick the modules each lambda needs.
    if [ "$LAMBDA" == "ingest_meta" ]; then
        declare -a EXTERNAL_MODULES=("${EMODS}/pymysql")
        MODULES=("${EXTERNAL_MODULES[@]}" "${FULL_INTERNAL_MODULES[@]}")
    elif [ "$LAMBDA" == "ingest_survey" ]; then
        declare -a EXTERNAL_MODULES=("${EMODS}/pymysql" "${EMODS}/openpyxl" "${EMODS}/et_xmlfile" "${EMODS}/jdcal.py")
        MODULES=("${EXTERNAL_MODULES[@]}" "${FULL_INTERNAL_MODULES[@]}")
    elif [ "$LAMBDA" == "upload_notification" ]; then
        MODULES=("${MIN_INTERNAL_MODULES[@]}")
    elif [ "$LAMBDA" == "get_embed_url" ]; then
        MODULES=("${MIN_INTERNAL_MODULES[@]}")
    elif [ "$LAMBDA" == "ingest_machine" ]; then
        declare -a EXTERNAL_MODULES=("${EMODS}/pymysql" "${EMODS}/openpyxl" "${EMODS}/et_xmlfile" "${EMODS}/jdcal.py")
        MODULES=("${EXTERNAL_MODULES[@]}" "${FULL_INTERNAL_MODULES[@]}")
    else
        echo "Nothing to do"
        exit 1
    fi

    echo "Modules needed for $LAMBDA:"
    for M in "${MODULES[@]}"
    do
        echo "$M"
    done

    # Stage the lambda's files and zip them.
    rm -Rf ./build
    mkdir -p ./build
    for M in "${MODULES[@]}"
    do
        if ! [ "$M" == "$PRJDIR/src/handler.py" ]; then
            cp -R "$M" "$PRJDIR/build/"
        fi
    done
    cp "$PRJDIR/src/$LAMBDA.py" "$PRJDIR/build/handler.py"

    cd "$PRJDIR/build" || exit 1
    chmod -R 775 ./*
    zip -rq "../dist/$LAMBDA.zip" ./*
    cd "$PRJDIR/dist" || exit 1

    CONFIG_PATH="$PRJDIR/config/"
    CONFIG_VPC="${CONFIG_PATH}vpc.json"

    if [ "$MODE" == "PROD" ]; then
        PROJECT="dxc-av_"
        ROLE="$PRODROLE"
    elif [ "$MODE" == "TEST" ]; then
        PROJECT="test-av_"
        ROLE="$TESTROLE"
    else
        echo "Please use either PROD or TEST MODE, not $MODE"
        exit 1
    fi

    CONFIG_ENV="$CONFIG_PATH$PROJECT${LAMBDA}_env_vars.json"
    if ! [ -f "$CONFIG_ENV" ]; then
        echo "$CONFIG_ENV does not exist"
        exit 1
    fi

    # Update the function's code if it already exists, otherwise create it.
    if aws lambda get-function --function-name "$PROJECT${LAMBDA}"; then
        aws lambda update-function-code --zip-file fileb://"$LAMBDA.zip" --function-name "$PROJECT${LAMBDA}"
    else
        aws lambda create-function --zip-file fileb://"$LAMBDA.zip" --function-name "$PROJECT${LAMBDA}" \
            --role "$ROLE" --handler "$HANDLER" --runtime "$RUNTIME"
    fi
    aws lambda update-function-configuration --function-name "$PROJECT${LAMBDA}" --timeout 60 \
        --environment file://"$CONFIG_ENV" --vpc-config file://"$CONFIG_VPC"
done
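The get-function-then-branch step in the script is the usual idempotent deploy pattern; expressed as a small Python helper (function and argument names are illustrative, not part of any tool):

```python
def lambda_deploy_command(function_exists, name, zip_path, role_arn, handler, runtime):
    """Build the aws-cli invocation for one lambda: update its code if the
    function already exists, otherwise create it. The existence check itself
    would be `aws lambda get-function` (or boto3's get_function)."""
    if function_exists:
        return ["aws", "lambda", "update-function-code",
                "--zip-file", f"fileb://{zip_path}",
                "--function-name", name]
    return ["aws", "lambda", "create-function",
            "--zip-file", f"fileb://{zip_path}",
            "--function-name", name,
            "--role", role_arn,
            "--handler", handler,
            "--runtime", runtime]
```

Role, handler, and runtime are only needed on the create path; updates leave the existing configuration alone, which is exactly the non-destructive behaviour being asked of Chalice for RDS.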
So while I'd like to make it possible to manage RDS with Chalice, to help with your immediate need, would a doc/code example that walks through how to use RDS and Chalice together help you? I'm still having a little trouble following how I can best help you here.
This is not about managing any given RDS instance. This is about recognizing, or asking whether, a module accesses a database on AWS, and then proposing a role with the necessary permissions that would allow the lambda to use that DB.
@mchassy has the correct idea.
Just allow us to set a RDS instance that the lambda can talk to, and then do the work of wiring that up as part of a deployment.
To start, make it error out when the option to use an RDS instance is configured but that instance isn't already deployed. That way you can iteratively add options/defaults as needed once that works.
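That fail-fast starting point is just a pre-deploy assertion. A sketch, with an illustrative exception name; a real implementation would obtain the existing IDs from boto3's rds describe_db_instances:

```python
class MissingRDSInstanceError(Exception):
    """Raised when the configured RDS instance does not exist."""

def require_rds_instance(existing_instance_ids, configured_id):
    """Abort the deploy early if the configured RDS instance is absent,
    instead of deploying a lambda that cannot reach its database."""
    if configured_id not in existing_instance_ids:
        raise MissingRDSInstanceError(
            f"RDS instance {configured_id!r} is configured but not deployed")
```

Everything beyond this (creating the instance, wiring networking) can then be layered on as opt-in behaviour once the strict check works.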
Got it, so have some easy way to say that a lambda function should have access to an RDS instance in your app, and then have chalice configure the appropriate permissions on that role. Makes sense, thanks for clarifying.
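The permission side of that is essentially one extra IAM statement on the autogenerated role. A sketch of building it, following the rds-db:connect resource format used by IAM database authentication (the resource ID and user are placeholders):

```python
def rds_connect_statement(region, account_id, db_resource_id, db_user):
    """Build the IAM policy statement that lets a lambda use IAM database
    authentication against one RDS instance. db_resource_id is the
    instance's DbiResourceId (e.g. "db-ABCDEFGHIJKL"), not its name."""
    return {
        "Effect": "Allow",
        "Action": ["rds-db:connect"],
        "Resource": [
            f"arn:aws:rds-db:{region}:{account_id}:dbuser:{db_resource_id}/{db_user}"
        ],
    }
```

A tool that knows which instance an app targets could append this statement to the role it already generates, alongside the S3 and CloudWatch statements.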
Similarly, I want my lambda to be able to connect to ElastiCache/Redis and am having the same failure. Please help!
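For ElastiCache, as for RDS inside a VPC, the missing wiring is mostly network placement: the function must be attached to subnets and a security group that can reach the cluster. A sketch of the payload boto3's lambda update_function_configuration accepts as VpcConfig (equivalent to the script's --vpc-config file://vpc.json; the IDs are placeholders):

```python
def vpc_config(subnet_ids, security_group_ids):
    """Build the VpcConfig payload for
    lambda.update_function_configuration(FunctionName=..., VpcConfig=...).
    Placing the function in the cache's subnets/security group is what
    lets it reach ElastiCache or an in-VPC RDS instance."""
    return {
        "SubnetIds": list(subnet_ids),
        "SecurityGroupIds": list(security_group_ids),
    }
```

Note that Chalice's config.json also accepts subnet_ids/security_group_ids per stage, which is the declarative form of the same thing.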
Why was this closed? Nobody is assigned and no explanation was given.
Sorry ... I opened this, but I have found other tools/solutions which match my needs better. I should have simply unsubscribed.
Now that Chalice integrates with the AWS CDK, seems like there could be a useful code example here if someone wants to make one (and maybe this should be closed?)
Not a full integration yet, but I am working towards it :)
See here: https://github.com/Rassibassi/aws-cdk-chalice-react-rds-postgres-cognito
> Sorry ... I opened this, but I have found other tools/solutions which match my needs better. I should have simply unsubscribed.
Hey @mchassy, will you share which other solutions you found more suitable? Thanks in advance!
Problem Statement
I am trying to use Chalice to update a table in MySQL based on files being uploaded to S3. For the moment, I am not even interested in the content of the files; I just want to insert or update a row in my DB to say that a given file has been uploaded.
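The core of that handler is a single idempotent upsert keyed on the object name. A sketch that takes any DB-API cursor (the table and column names are assumptions); a Chalice @app.on_s3_event handler would call it with the event's bucket and key:

```python
def record_upload(cursor, bucket, key):
    """Insert or update one row per uploaded S3 object.

    Uses MySQL's INSERT ... ON DUPLICATE KEY UPDATE so re-delivered S3
    events don't create duplicate rows; uploads.s3_key is assumed to
    have a unique index.
    """
    cursor.execute(
        "INSERT INTO uploads (bucket, s3_key, uploaded_at) "
        "VALUES (%s, %s, NOW()) "
        "ON DUPLICATE KEY UPDATE uploaded_at = NOW()",
        (bucket, key),
    )
```

Taking the cursor as a parameter keeps the SQL testable with a fake cursor and leaves the pymysql connection (and the VPC/permission questions below) as a separate concern.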
Auto-generate roles
Chalice auto-generates a role with S3 and CloudWatch permissions, but nothing about access to the DB.
Attempted workarounds
There is configuration to use one's own role and permissions. I noticed the role "my-project-dev", but I get an error which says that the role can't be deleted:

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/markchassy/.local/lib/python3.7/site-packages/chalice/cli/__init__.py", line 512, in main
    return cli(obj={})
  File "/home/markchassy/.local/lib/python3.7/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/home/markchassy/.local/lib/python3.7/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/home/markchassy/.local/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/markchassy/.local/lib/python3.7/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/markchassy/.local/lib/python3.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/markchassy/.local/lib/python3.7/site-packages/click/decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/home/markchassy/.local/lib/python3.7/site-packages/chalice/cli/__init__.py", line 205, in deploy
    deployed_values = d.deploy(config, chalice_stage_name=stage)
  File "/home/markchassy/.local/lib/python3.7/site-packages/chalice/deploy/deployer.py", line 344, in deploy
    raise ChaliceDeploymentError(e)
chalice.deploy.deployer.ChaliceDeploymentError: ERROR - While deploying your chalice application, received the following error:
An error occurred (DeleteConflict) when calling the DeleteRole operation: Cannot delete entity, must detach all policies first.
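The DeleteConflict error means the role still has policies attached; IAM requires detaching managed policies and deleting inline policies before DeleteRole succeeds. A sketch against boto3's IAM API, with the client injected so it can be exercised without an AWS account (pagination is ignored for brevity; untested against a real account):

```python
def force_delete_role(iam, role_name):
    """Detach managed policies and delete inline policies, then delete
    the role -- the order IAM requires before DeleteRole succeeds."""
    attached = iam.list_attached_role_policies(RoleName=role_name)
    for pol in attached["AttachedPolicies"]:
        iam.detach_role_policy(RoleName=role_name, PolicyArn=pol["PolicyArn"])
    inline = iam.list_role_policies(RoleName=role_name)
    for name in inline["PolicyNames"]:
        iam.delete_role_policy(RoleName=role_name, PolicyName=name)
    iam.delete_role(RoleName=role_name)
```

With a real client this would be called as force_delete_role(boto3.client("iam"), "my-project-dev") — only appropriate once you are sure nothing else uses the role.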
{
  "version": "2.0",
  "app_name": "top200",
  "stages": {
    "dev": {
      "api_gateway_stage": "top200",
      "manage_iam_role": false,
      "autogen_policy": false,
      "iam_role_arn": "arn:aws:iam::439359573308:role/top200-dev"
    }
  }
}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["rds-db:connect"],
      "Resource": [
        "arn:aws:rds-db:eu-west-1:439359573308:dbuser:arn:aws:rds:us-east-1:439359573308:db:top200-portal/top200admin"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": ["*"],
      "Sid": "5a1a5b736299483abb776aeee43f4a88"
    },
    {
      "Effect": "Allow",
      "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}