Miserlou / Zappa

Serverless Python
https://blog.zappa.io/
MIT License

[Feature Request] Add EFS Support #2128

Open · dleber opened this issue 4 years ago

dleber commented 4 years ago

Context

AWS recently announced the ability to attach EFS (Elastic File System) file systems to Lambda functions (more info).

It would be great if Zappa offered the ability to attach an EFS file system to Lambda functions running in a VPC.

Expected Behavior

In zappa_settings, specify an optional EFS config, including

  1. EFS mount path
  2. An EFS id
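
Under the hood this would map onto Lambda's FileSystemConfigs setting, which takes an EFS access point ARN plus a local mount path. A minimal boto3 sketch of what Zappa would need to call (function name and ARN are placeholders):

```python
import boto3

lambda_client = boto3.client("lambda")

# Lambda's FileSystemConfigs takes an EFS access point ARN and the
# path where the file system should be mounted inside the function.
lambda_client.update_function_configuration(
    FunctionName="my-zappa-app-production",  # placeholder name
    FileSystemConfigs=[
        {
            "Arn": "arn:aws:elasticfilesystem:us-east-1:123456789012:"
                   "access-point/fsap-0123456789abcdef0",
            "LocalMountPath": "/mnt/efs",
        }
    ],
)
```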
CaseGuide commented 4 years ago

This would indeed be fantastic. Is there any way to add this via the Zappa config?

CaseGuide commented 4 years ago

I did get EFS working with a Zappa-deployed Lambda function by creating a file system roughly following this tutorial.

What's really cool is that with Zappa, you don't need EC2 to load files into it. You can simply upload the desired files to S3 and move them from a bucket your function has access to using the zappa invoke command. Something like: zappa invoke production "import boto3; s3 = boto3.client('s3'); s3.download_file('BUCKET_NAME', 'FILE_NAME', '/mnt/dir/FILE_NAME')" --raw should do the trick.

There may even be a way to do this at deploy time; I haven't put any thought into that yet. It could be as easy as executing a function that moves them in response to an upload to a certain bucket, like in this example.
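
For anyone curious, a rough sketch of what that upload-triggered handler might look like (the mount path is a placeholder, and the function would be wired up via Zappa's events setting with an s3:ObjectCreated:* event source on the bucket):

```python
import os
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")


def sync_to_efs(event, context):
    """Copy each object uploaded to the trigger bucket onto the EFS mount."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # S3 event notification keys arrive URL-encoded
        key = unquote_plus(record["s3"]["object"]["key"])
        dest = os.path.join("/mnt/dir", key)  # /mnt/dir = the EFS mount path
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        s3.download_file(bucket, key, dest)
```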

mpizosdim commented 3 years ago

That would be a useful feature. It could handle heavy libraries such as TensorFlow.

millarm commented 3 years ago

How is this different to how slim_handler works, which loads the data from a ZIP file in S3 at runtime?

mpizosdim commented 3 years ago

slim_handler has a max size limit of 250 MB. TensorFlow 2.x is more than 500 MB, so you can't deploy it that way.

millarm commented 3 years ago

I thought the slim_handler limit was actually 500 MB, as it deploys onto the /tmp dir (I can't remember; it might be the 250 MB unzipped limit for native Lambdas).

From memory, Lambda layers are also hit by the 250 MB limit, as they are deployed natively, so the EFS model would work neatly for deploying large artifacts that will still run fine within Lambda's memory limits.
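
e.g. you'd pip-install the heavy packages onto the file system once, then put that directory on sys.path before importing (paths are placeholders, and this assumes the EFS mount is already configured on the function):

```python
import sys

# Assumes the dependencies were installed onto EFS ahead of time, e.g.
#   pip install --target /mnt/efs/python tensorflow
# and that the file system is mounted at /mnt/efs in the function's config.
EFS_PACKAGES = "/mnt/efs/python"
if EFS_PACKAGES not in sys.path:
    sys.path.insert(0, EFS_PACKAGES)

import tensorflow as tf  # resolved from EFS, dodging the 250 MB package limit

print(tf.__version__)
```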

tobiasblasberg commented 3 years ago

Is there any update on this issue? Having the ability to deploy Lambda functions larger than 500 MB with Zappa, using EFS or the recently released container image support, would be fantastic.

p-schlickmann commented 3 years ago

> I did get EFS working with a Zappa-deployed Lambda function by creating a file system roughly following this tutorial.
>
> What's really cool is that with Zappa, you don't need EC2 to load files into it. You can simply upload the desired files to S3 and move them from a bucket your function has access to using the zappa invoke command. Something like: zappa invoke production "import boto3; s3 = boto3.client('s3'); s3.download_file('BUCKET_NAME', 'FILE_NAME', '/mnt/dir/FILE_NAME')" --raw should do the trick.
>
> There may even be a way to do this at deploy time; I haven't put any thought into that yet.

Hey, how did you get your files from S3 to EFS? I'm getting this error while trying to run your invoke example:

ConnectionError: HTTPSConnectionPool Max retries exceeded with url (Caused by NewConnectionError)

My bucket is public and everyone can access it except, it seems, my Lambda function. Any tips?