jingai closed this 6 years ago
After a little playing around I got it working. I had some issues with the Mac-compiled versions of Crypto and a couple of other packages, though that has been a problem for a while now, and I was able to use the precompiled versions from the lambda_packages repo that Zappa uses.
I also had some cache issues, but I am pretty sure that is because I stripped the lambda user down to basic permissions (lambda execution only). I turned caching off and it seemed to work OK with the few tests I threw at it.
I suspect it would even be possible to provide a zip pre-bundled with all the right bits for AWS, so the user would only have to edit the config file and upload it through a browser (as I did); they wouldn't have to touch virtualenv or even the command line :)
@digiltd How are you creating the zip file to upload without Zappa? I just tried making one and it was 35MB.
I followed the instructions here: http://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html
Downloaded the zip of this repo (I wanted it free of any git stuff), created a new virtualenv, and installed the requirements (though I had to get flask-ask manually, as I don't think the pip release has the relevant changes to run it Zappa-free).
Then I went into the venv's site-packages and copied the relevant bits over into the root of the folder, and zipped the contents of the folder (minus the venv folder, of course, any "dist-info" leftovers, and all the .pyc files). The slight gotcha here is that you have to zip the contents of the folder, not the folder itself (but this only applies to the root).
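The bundling steps above could be scripted rather than done by hand. A rough sketch, assuming the layout described (lambda_function.py and the copied packages sitting in one folder) -- it zips the folder's contents so everything lands at the archive root, and skips the venv, .pyc files, and dist-info leftovers:

```python
import os
import zipfile

# directories we never want in the bundle
SKIP_DIRS = {"__pycache__", "venv"}

def should_skip(arcname):
    """True for compiled caches and packaging metadata."""
    parts = arcname.split(os.sep)
    if any(p.endswith(".dist-info") for p in parts):
        return True
    return arcname.endswith(".pyc")

def build_bundle(src_dir, zip_path):
    """Zip the *contents* of src_dir (not the folder itself), which is the
    gotcha mentioned above -- Lambda expects lambda_function.py at the
    archive root."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, dirs, files in os.walk(src_dir):
            # prune unwanted directories in place so os.walk never descends
            dirs[:] = [d for d in dirs
                       if d not in SKIP_DIRS and not d.endswith(".dist-info")]
            for name in files:
                full = os.path.join(root, name)
                # relative paths put files at the root of the archive
                arcname = os.path.relpath(full, src_dir)
                if not should_skip(arcname):
                    zf.write(full, arcname)
```

This is only illustrative; the manual drag-and-zip described above does the same job.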
Also, because I use a Mac, some of the packages needed to be swapped out for the ones from the https://github.com/Miserlou/lambda-packages repo, but I think only cffi and cryptography were required. I did it manually, dragging from one folder to another (that sounds patronising, but the concept is a little bizarre when we rely so much on package management and build tools these days 😄). If you are pip-installing on Linux then I don't think that is a problem.
Here is the version I put together (with a blank config), in theory you would be good to go just uploading that. It also has the modified alexa.py (now lambda_function.py) file.
https://www.dropbox.com/s/5u02lv8xm2p67a5/kodi-alexa-lambda-bundle-blank-config.zip?dl=1
Whilst it does still seem a bit of a faff, only the person bundling it together would have to do it, not the end user.
When setting up the lambda, make sure you add the Alexa Skills Kit trigger. For the role, I used the basic lambda_basic_execution role rather than an admin role. I was never a fan of the way you had to give Zappa administrator access; even with the early version of this skill that used `lambda-deploy` I disobeyed the instruction to use an admin role to execute the lambda and stuck with basic execution :)
But because I wanted to try the S3 cache, I had to add a policy to the role to allow this. And because I wanted to restrict access to a single bucket (one I had already created, with a random string at the end, e.g. kodi-voice-cache-u8gpmsyrhetqqkuz), I did this using the policy below. It might seem a little too "tin foil hat", and whilst I trust this skill, it only takes a single mistake and (worst-case scenario) the lambda has then wiped out every bucket, API gateway, lambda, user/role/group, VPS, DB, DNS route, and everything else under the control of my AWS superuser in one command with no undo. Well, that might take more than a single mistake, but giving the lambda an admin role makes it all theoretically possible.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::kodi-voice-cache-u8gpmsyrhetqqkuz"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::kodi-voice-cache-u8gpmsyrhetqqkuz/*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:ListObjects"
            ],
            "Resource": "*"
        }
    ]
}
I am not sure, but maybe giving the lambda access to the bucket via its role means you might not need the separate AWS credentials in the config, though that would require a bit of work to change the way the cache works... and it would only apply to this scenario (lambda-hosted, using S3). Is there a difference between botocore and boto3? boto3 is bundled with the AWS SDK that is included with all lambdas, so it wouldn't need to be packaged.
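On the credentials point: inside Lambda, boto3 resolves temporary credentials from the execution role automatically, so a client created with no explicit keys would use whatever the policy above grants. (And on the other question: boto3 is the high-level SDK built on top of botocore, which is the low-level plumbing.) A hedged sketch of how the cache code might fall back to the role -- `make_s3_client` is a hypothetical helper, not something in this repo:

```python
def make_s3_client(access_key=None, secret_key=None, region="us-east-1"):
    """Hypothetical helper: prefer explicit keys from the config, but fall
    back to the Lambda execution role's credentials when none are given."""
    import boto3  # bundled with the Lambda Python runtime, no need to package it

    if access_key and secret_key:
        return boto3.client(
            "s3",
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key,
            region_name=region,
        )
    # No keys given: boto3 walks its credential chain (env vars, shared
    # config, then the execution role) and finds the role automatically.
    return boto3.client("s3", region_name=region)
```

Whether dropping the config keys entirely is worth the refactor is a separate question, since the skill can also run outside Lambda.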
Any questions, let me know. I am by no means an aws expert, but I do use it day to day.
Then went into the venv site-packages and copied the relevant bits over into the root of the folder.
How do you decide what's relevant? Just do a pip freeze or something?
Old school: just set up the venv, made a note of what was in there, and included everything else (minus the pip and setuptools stuff).
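The note-taking could be scripted too. A minimal sketch, assuming you feed it the output of `pip freeze` and that only the standard tooling packages need dropping:

```python
# Tooling that supports the build but doesn't need to ship in the bundle.
TOOLING = {"pip", "setuptools", "wheel"}

def relevant_packages(freeze_output):
    """Return the `pip freeze` lines worth copying into the bundle."""
    keep = []
    for line in freeze_output.splitlines():
        # freeze lines look like "Flask==0.12"; compare names case-insensitively
        name = line.split("==")[0].strip().lower()
        if name and name not in TOOLING:
            keep.append(line.strip())
    return keep
```

The hard-coded TOOLING set is an assumption; a real venv may have other helper packages worth excluding.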
Okay this is integrated now.
Rather than going through Zappa -- see here.