**Closed** — danmermel closed this issue 4 years ago
Have started this... now need to:
- potentially remove the Docker bit
- have only one package.json and one npm install
- rationalise the prepare script (see above)
There is another angle here: can we let terraform do the deploy of lambda?
This Terraform will zip up a directory:

```hcl
data "archive_file" "triggerOnUploadLambdaZip" {
  type        = "zip"
  source_dir  = "lambda/triggerOnUpload"
  output_path = "lambda/triggerOnUpload.zip"
}
```
and then you can add the code like this:

```hcl
resource "aws_lambda_function" "funguyLambda" {
  filename         = "lambda/triggerOnUpload.zip"
  source_code_hash = data.archive_file.triggerOnUploadLambdaZip.output_base64sha256
  function_name    = "triggerOnUpload"
  role             = aws_iam_role.funguyLambdaRole.arn
  description      = "triggers meshroom build on file drop in funguy S3"
  handler          = "index.handler"
  runtime          = "nodejs12.x"
}
```
I cannot quite remember why we did it separately, and/or whether the above will do for us... to discuss!
You could run Terraform on deploy (in Travis) if you like - unfortunately, it doesn't currently work properly (the taint thing...). But it could upload the code too if we wanted. At the moment we have a separation: infrastructure changes are applied manually, while code deploys are automated.
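For reference, if we ever revisit running Terraform from Travis, the deploy step might look roughly like this. This is only a sketch: the branch name and credential handling are assumptions, and the taint problem mentioned above would still need solving. Note that with `source_code_hash` set on the Lambda resource, a plain `terraform apply` should redeploy the function whenever the zipped source changes, which may reduce the need for manual tainting:

```yaml
# Hypothetical .travis.yml fragment (provider/branch details are illustrative)
deploy:
  provider: script
  script: terraform init -input=false && terraform apply -input=false -auto-approve
  on:
    branch: master
```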
There seems to be a lot of repetition in this... we could turn it into a more streamlined script: `./deploy.sh` should hold an array of clue types (anagram, container, etc.) and call `./<cluetype>/prepare.sh` for each one. That will save us modifying the bash scripts many times.
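A streamlined `deploy.sh` along those lines might look something like this (a sketch only; the clue-type names and script paths are assumptions based on the comment above):

```shell
#!/usr/bin/env bash
# Hypothetical streamlined deploy.sh: one loop over clue types instead of
# repeated per-type blocks. Clue types and paths are illustrative.
set -euo pipefail

CLUE_TYPES=(anagram container)  # add new clue types here only

prepare_all() {
  local type
  for type in "${CLUE_TYPES[@]}"; do
    echo "Preparing ${type}"
    # "./${type}/prepare.sh"    # uncomment once the per-type scripts exist
  done
}

prepare_all
```

Adding a new clue type then becomes a one-line change to the array rather than an edit in several bash scripts.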