miere / cargo_lambda

A Cargo plugin to generate and package musl-based binaries that can be uploaded as AWS Lambda functions.

cargo lambda deploy #1

Open · brainstorm opened this issue 4 years ago

brainstorm commented 4 years ago

It would be fantastic to have this subcommand; here's how I do it for now (with Python):

https://github.com/brainstorm/htsget-aws/blob/master/deploy/app.py

I guess a (very?) few lines of rusoto could do the trick?
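
For reference, a minimal sketch of what that upload could look like with async rusoto_s3 (the bucket name, key, and package path are hypothetical):

use rusoto_core::Region;
use rusoto_s3::{PutObjectRequest, S3Client, S3};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Read the zipped bootstrap binary produced by cargo_lambda.
    let package = std::fs::read("target/lambda/package.zip")?;

    // Upload it to S3; the region is taken from the environment.
    let client = S3Client::new(Region::default());
    client
        .put_object(PutObjectRequest {
            bucket: "my-deploy-bucket".to_string(), // hypothetical bucket
            key: "lambdas/package.zip".to_string(), // hypothetical key
            body: Some(package.into()),
            ..Default::default()
        })
        .await?;

    Ok(())
}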

miere commented 4 years ago

Nice idea. So, guessing from the S3 synth API call, you need a way to upload the package to S3. Have I guessed correctly?

I reckon it’s quite easy to create, indeed. I shall have free time by the second half of this week. I’ll make sure to include an ‘upload’ or ‘deploy’ subcommand in cargo_lambda.

Cheers

brainstorm commented 4 years ago

This CDK line takes care of uploading the asset for you, but yes, the .zip with a bootstrap binary inside has to be uploaded to S3 if you deploy outside the CDK. I reckon you can easily replicate that with Rusoto, but you'll have to be careful to give the lambda access to the resources it needs (in my case, read access to a particular S3 bucket, as stated on this line).

miere commented 4 years ago

I wonder what your use case would be. Usually, this kind of toolchain (like cargo_lambda) relies on a proper separation of concerns. In practice, that means the bucket should (usually) be created by an external tool (like Terraform, Ansible, CloudFormation, etc.). We could definitely go further and provide the whole set of configurations, creating and managing both the bucket and the lambda function, but I'm afraid that would cross the boundaries of this small tool. I would then need to cover every edge case that comes with it - bucket creation, lambda creation, updates, permissions, etc. - making this niche, focused tool a bit hard to maintain in the long run.

S3 upload, on the other hand, makes sense to me. Unlike managing the bucket and the lambda function itself, the S3 upload is simple: a single step with absolutely no side effects. Triggering the Lambda update after the package has been uploaded is a bit of a grey zone, as Amazon provides several ways to do that, CodeDeploy included.
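
For what it's worth, the most direct of those ways is probably a single UpdateFunctionCode call. A sketch with rusoto_lambda (the function, bucket, and key names are hypothetical):

use rusoto_core::Region;
use rusoto_lambda::{Lambda, LambdaClient, UpdateFunctionCodeRequest};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Point the existing function at the package previously uploaded to S3.
    let client = LambdaClient::new(Region::default());
    client
        .update_function_code(UpdateFunctionCodeRequest {
            function_name: "my-function".to_string(), // hypothetical function
            s3_bucket: Some("my-deploy-bucket".to_string()),
            s3_key: Some("lambdas/package.zip".to_string()),
            ..Default::default()
        })
        .await?;

    Ok(())
}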

What I did in this regard was create a (Terraform) module that relies on the generated package to update the function. In my primary use case, I'm using an Application Load Balancer as an HTTP RPC layer for my Lambda functions. I'm not sure how this would suit your use case, though. Hopefully, I'll be authorized to open-source the module as well.

module "http_layer" {
  source = "*******/alb-rpc-to-lambda/aws"

  name = "${var.namespace}-api-${var.environment}"
  folder_with_lambda_packages = local.folders_lambda_packages

  vpc_id = local.vpc_id
  subnet_ids = local.subnet_ids

  endpoints = [
   { pattern = "/wallet/*", zip_file = "wallet.zip" },
   { pattern = "/ledger/*", zip_file = "ledger.zip" },
   { pattern = "/settlement/*", zip_file = "settlement.zip" },
 ]
}
brainstorm commented 4 years ago

Yeah, totally get it; there's no need to reimplement things like serverless.com or the CDK here. I'll stick with my CDK template instead of Terraform for now (I prefer the CDK)... OTOH, there's this ongoing SAM CLI cargo pull request:

https://github.com/awslabs/aws-lambda-builders/pull/174