Open vespertilian opened 3 years ago
If you don't have the time I also understand, as this is free software. Thanks anyway for that.
The way I've done lambda layers is not very useful, to be honest. It only works if you have a piece of custom code which you want to deploy as a layer, not if you want to package one or more npm modules as a lambda layer (which is much more common.)
This is something I've done manually a few times but never integrated into nx-aws. I'd be happy to add this into nx-aws, although, I wonder if it might be possible to get the majority of the way there just using run-commands... this is what I have in mind. You'd have a layer project and the project referring to the layer.
Layer project: build the layer into the `dist` directory. Point three is where we might make it a little bit nicer in nx-aws, by automatically rewriting a project reference to point to the `dist` directory. (TBH, I don't love the way project references work in nx-aws; it feels a bit too magical and not in keeping with CloudFormation, but it is awfully useful.)
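For illustration, the run-commands approach might look something like this in workspace.json (the project name, target, and paths here are hypothetical, and the `nodejs/node_modules` layout is what Lambda expects for node layers):

```json
{
  "lambda-layer": {
    "root": "apps/lambda-layer",
    "architect": {
      "build": {
        "builder": "@nrwl/workspace:run-commands",
        "options": {
          "commands": [
            "mkdir -p dist/apps/lambda-layer/nodejs",
            "cp apps/lambda-layer/package.json dist/apps/lambda-layer/nodejs/",
            "npm install --prefix dist/apps/lambda-layer/nodejs --production"
          ],
          "parallel": false
        }
      }
    }
  }
}
```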
Would that tick the box?
@studds Your plan sounds good I will give it a go and let you know how it turns out. Thanks!
@studds So sadly it seems to just endlessly build the image when I get the directories set up correctly. I am going to try to set this up separately this afternoon outside of NX to check my config. Maybe Webpack is causing issues.
Good chatting today @vespertilian. It's clear that the way nx-aws currently handles layers is broken. This is further complicated because the way sam handles layers is broken.
My preferred solution would be to hook in `sam build`, but given the issues above I'm not sure that's a complete solution.
The workaround we used today (run `npm install` in the `src` directory, and use "assets" to copy it across to the `dist` directory) works, but is kinda ugly. I don't want to have to run `npm install` in the `src` directory.
I do like your suggestion of essentially generating layers using config in `workspace.json` / `angular.json`; however, layers configured in `template.yaml` should "just work".
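For reference, the assets-based workaround described above amounts to something like this in the function app's build target options (paths are illustrative, not the actual repo layout):

```json
{
  "options": {
    "assets": [
      {
        "glob": "**/*",
        "input": "apps/my-api/src/my-layer/node_modules",
        "output": "my-layer/node_modules"
      }
    ]
  }
}
```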
So my first priority is to fix the way layers are handled in nx-aws so that valid layers "just work". This may be a breaking change to anyone who is using the current functionality, although tbh the current functionality is not terribly functional.
Great talking to you too. Thanks. Just let me know if you have some time and we can hack on it together.
> the way sam handles layers is broken.
I read through the issues. Besides these, there seem to be cross-env build difficulties et al. that can only be resolved by running package installation commands inside a Docker container mirroring AWS's Lambda image.
I'd appreciate any insight anyone might have into the issue of building certain packages (npm in my case) on Windows/Mac and then having them fail during `sam build` with, e.g., `invalid ELF header`. Is current best practice to run `npm i` inside a Lambda-like Docker container and ensure those packages are the ones bundled into `sam build` commands?
Secondly, is this an issue others would like to see addressed by `sam` itself on `build`? Personally, I don't understand why I have to run `npm i` at all; sam could run this on `build`, in my opinion. I'll probably file a ticket in sam-cli, but I would appreciate feedback or any thoughts on the matter before I make a proposal. Or if someone knows of a better issue thread, I'd be grateful for the link.
For context, I'm trying to stand up a local dev environment to run lambda tests. So far I have a single pg docker container for db calls and would like to keep these to a minimum. Running a lambda container for the sole purpose of npm/pip installing and updating your packages seems unnecessary. sam already does most of this work, and has access to a correctly provisioned docker container to do the installing and updating in.
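A sketch of the container-based install route mentioned above (the image tag and paths are assumptions; the lambci/lambda build images mirror the Lambda runtime):

```dockerfile
# Install node_modules inside an image that mirrors the Lambda runtime,
# then copy them back out to the host via a bind-mounted volume.
FROM lambci/lambda:build-nodejs12.x
WORKDIR /var/task
COPY package.json package-lock.json ./
RUN npm ci --production
# Build and extract, e.g.:
#   docker build -t layer-deps .
#   docker run --rm -v "$PWD/out:/out" layer-deps cp -r /var/task/node_modules /out/
```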
thanks!
@HP4k1h5
So you can install the correct package by passing some command-line arguments (see below), without having to spin up a custom docker image, assuming the binaries are pre-built. They have been for all the packages I've needed to use that have binaries, like bcrypt and argon2.
npm install argon2 --save --target=12.2.0 --target_arch=x64 --target_platform=linux --target_libc=glibc
This forces the install of the linux x64 binary even if you are on a mac / windows computer. This works with SAM local as invoke and serve-api run the code through the docker container. For unit testing you just need to add the package as a dev dependency.
As a bit of a hack, you can add this to the postinstall script in your package.json, and it will run whenever you run npm install.
```json
{
  "name": "graphql-libs",
  "main": "index.ts",
  "dependencies": {
    "argon2": "^0.27.0"
  },
  "scripts": {
    "postinstall": "npm install argon2 --save --target=12.2.0 --target_arch=x64 --target_platform=linux --target_libc=glibc"
  }
}
```
You can see this in action with a plain SAM project here: https://github.com/vespertilian/sam-layers-test
Are you using nx-aws? @studds and I are planning on removing some old layer code and then trying to work out how to make the experience of creating layers more enjoyable.
We hacked together something a few days ago, and I have been using that; it's not great. SAM local start-api is a bit slow when it has to rebuild layers every time locally. So I am thinking that a better option would be to publish the layers and then use the published version locally. I believe it gets cached that way, though I have not yet had time to verify this.
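Referencing a published layer version from a function would look something like this in template.yaml (the ARN, paths, and logical IDs are placeholders):

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs12.x
    CodeUri: ../../dist/apps/my-api
    Layers:
      # Published layer version ARN (placeholder values)
      - arn:aws:lambda:ap-southeast-2:123456789012:layer:my-deps:3
```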
@vespertilian big thanks for your answer. I will try the `npm i` flags you listed. I have tried some of them before, but I can't find documentation on them in npm.
After some trial and error I was able to get the npm package working in sam by installing it inside a `node:12.15` docker image, and then copying the node_modules to the appropriate `node_modules` dir in the sam repo. I tried several images but only `node:12.15` worked as intended. This solution can be reduced to a docker-compose/Dockerfile with appropriate `volume`s and `RUN` cmds, plus a script to port the built packages to their intended destinations, but it leaves a great deal to be desired, since starting the dev env becomes that much more error-prone for fellow devs. I don't know what the answer is, but if the npm i flags can take docker out of the equation I will probably do it that way.
I'm new to sam and have not tried nx-aws, but I will be looking into it this weekend.
Publishing the layers and using cloud versions could be a good idea, but if a dev doesn't have a network connection, it might create problems unless there was a fallback.
Thanks again for your help and time.
@HP4k1h5
You're welcome. With nx-aws currently, the main version is not working well with layers, so don't try to use it until @studds and I can get a newer version out.
> So my first priority is to fix the way layers are handled in nx-aws so that valid layers "just work". This may be a breaking change to anyone who is using the current functionality, although tbh the current functionality is not terribly functional.
I'm not going to be able to make this "just work" because it seems like the sam-cli handling of these is fundamentally broken: https://github.com/aws/aws-sam-cli/issues/2222
I've created a PR that would "in theory" make lambda layers work, but essentially at the moment with sam-cli you have to choose between whether you want layers to work locally or when deployed. Kinda stupid.
I've also created an example where I tried to build a valid layer independently of sam-cli, to get around the inconsistencies between start-local and deploy in sam-cli. No dice. It worked when I deployed it, but not locally. Ugh. It seems like layers in sam-cli are just straight up broken. @vespertilian have you had any more luck on this?
Hey guys, great repo.
Chiming in here since I have just spent the last few hours diving in to get this working and I have a few thoughts. I'd love to know where I've gone wrong and how I can help.
Make a layer from another NX app/lib in my monorepo.
Should be as easy as: `nx g lib lambda-layer`. Then in the template you could do:

```yaml
MyLambdaLayer:
  Properties:
    ContentLib: lambda-layer
```
For simplicity's sake, I started by just making a new `@nx/node:app` in the project; that way I can use the default nx node builder. I can use the new generatePackageJson flag to have it automatically generate a package.json file for me in this "layer". I have overwritten the webpack config of this node app to just copy the files and rename a few things so it is in the format `nx-aws` expects. So I'm left with my layer's typescript files in a dist/layer folder with the new package.json file. This is ready to compile as a layer. For now my template.yml looks like this:
```yaml
MyLambdaLayer:
  Properties:
    ContentUri: ../../dist/apps/lambda-layer
```
I've noticed that this line might be causing some issues: https://github.com/studds/nx-aws/blob/c4697ab2450c5e7fb576142fad0bbfac6009b864/packages/sam/src/builders/build/get-entries-from-cloudformation.ts#L120
Since options is globally used, when this layer updates that option, it is updating it for all functions and layers to be built. I'm not sure if that is intended since each function/layer may need to compile to a different location.
Because of this, whenever I have a layer in the project, my functions don't seem to rebuild correctly.
I'd like to use the `generatePackageJson` functionality within NX to generate a package.json file for me from the lib code. `nx-aws` could use that to create the package.json before compiling.
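For reference, the generatePackageJson flag is an option on the @nrwl/node build target, roughly like so (the paths are illustrative):

```json
{
  "build": {
    "builder": "@nrwl/node:build",
    "options": {
      "outputPath": "dist/apps/lambda-layer",
      "main": "apps/lambda-layer/src/index.ts",
      "tsConfig": "apps/lambda-layer/tsconfig.app.json",
      "generatePackageJson": true
    }
  }
}
```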
I'd like to play around with this repo but I am having a little trouble getting up and running. I've used `yarn link` in my other repo, but the dev cycle seems a little off to me. Could you share the best method to iterate/develop on this project? I'd love to submit a PR or help out since this is clearly a valuable NX addition.
That's all for today. I hope to help! Message me if you'd like to chat!
@studds - I'd love to chat if you have the time.
Hey @dbrody,
So, the way I have layers currently working, until we get the proper fix, is to co-locate them next to my lambda functions. I install all the dependencies in that local embedded folder, then copy them as assets into the dist directory so they are in the correct place for deployment. @studds helped me create a forked branch to achieve this. https://github.com/studds/nx-aws/compare/master...vespertilian:layers-update
Maybe this gets you unstuck for now, while we sort out the longer-term solution?
@vespertilian Nice. I'll take a look.
For that solution, do you have to manually track and update the package.json in the lib? Also, how do you format your template.yml file?
I'm very interested in the "long term" solution.
Hey @dbrody @vespertilian - I've been working away at a fix and a bunch of other enhancements. The problem is that I've got my branches twisted up 😦
I should be pushing out a release later today which will unblock layers, and I'll be adding a suggested approach to local dev as well.
@studds Awesome. Any way for me to send you a private message? I couldn't find an easy way on GitHub. Would you have time to chat?
So close and yet so far.
Where I'm at: `yarn add @nx-aws/core@next @nx-aws/s3@next @nx-aws/sam@next`. I'd recommend using this.
Where I'm going: I think the best way to do layers (as shown in that example) is to use the `importStackOutputs` option on the `@nx-aws/sam:deploy` and `@nx-aws/sam:execute` builders; see the example for how this works.
Where I'm stuck:
The f*&%^&$$ SAM CLI uploads a corrupt zip for the layer, and so nothing works. Argh! At least I get the same failure when deployed and locally, so that's an improvement.
@vespertilian: can you have a look at this and see if you can figure out how to get SAM CLI to deploy a valid layer? I know you've spent more time with it than me so this might be super obvious to you.
Etc
@dbrody I've added instructions to the readme on how I develop locally - it's not perfect by any means, but it works (for the most part). Raise a separate issue if you've got questions on that.
@studds Awesome. My first thought is does that still require manually updating the layer's package.json inside its folder? I'd love a solution where that part of the layer/lambda is handled by NX somehow.
The example is currently geared up just to provide a dependency (in this case, bcrypt) which uses binaries and so won't work with the "normal" webpack approach, because you can't webpack a binary (AFAIK). If you wanted to package up a library and avoid updating the layer's `package.json` manually, then you'd just need to add in the `@nrwl/node:package` builder first, which analyses your code and injects dependencies into `package.json`.
@studds sure I can take a look, just started to mess around, and will keep looking into it first thing tomorrow.
@studds
This is my hack to be able to NPM install the correct binary for the lambda Linux instance.
I agree that referring to the ARN is better than deploying every time, and a separate stack is a good idea. It would be nice if we could auto-reference the ARN so that it gets updated "if and when" we make changes and need to deploy a new layer using the `nx affected` command.
The ARN also makes local development quicker, as the package gets cached locally when serving your lambda via SAM.
I was able to deploy the sample branch you referenced. A couple of things: you don't need the index file and require statement for the dependency in the lambda layer's files. I just have a package.json and the node modules as shown below.
You also need to set the dependency to be an external dependency, or webpack will bundle it; I think this was your issue.
@dbrody, @studds
The only way I can see to automatically sync the lambda layer and NX package.json dependencies is to have a special NX-AWS config, where you specify the dependencies and whether they are binary and require the post-install hack. We'd then read the package.json to get the version of each dependency, combine it with the config file, and create a new package.json with the dependencies and the binary setup via the post-install script hack.
Sample NX lambda layer config
```json
{
  "deps": ["dep1", "dep2"],
  "binaryDeps": ["argon2"],
  "binaryPostfix": "--target=12.2.0 --target_arch=x64 --target_platform=linux --target_libc=glibc"
}
```
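As a rough sketch of what that generation step could do (the function name and config shape here are hypothetical, not real nx-aws API): read version ranges from the workspace package.json, then emit a layer package.json whose postinstall re-installs the binary deps with the Lambda target flags.

```javascript
// Hypothetical helper: build a layer package.json from a workspace package.json
// plus the nx-aws-style layer config sketched above. Not real nx-aws API.
function makeLayerPackageJson(workspacePkg, layerConfig) {
  const deps = {};
  // Copy the version range of every configured dependency from the workspace package.json.
  for (const name of [...layerConfig.deps, ...layerConfig.binaryDeps]) {
    deps[name] = workspacePkg.dependencies[name];
  }
  // Binary deps get re-installed for the Lambda target via a postinstall script.
  const postinstall = layerConfig.binaryDeps
    .map((name) => `npm install ${name} --save ${layerConfig.binaryPostfix}`)
    .join(' && ');
  return {
    name: `${workspacePkg.name}-layer`,
    dependencies: deps,
    ...(postinstall ? { scripts: { postinstall } } : {}),
  };
}

// Example usage with the shapes from this thread:
const workspacePkg = {
  name: 'graphql-libs',
  dependencies: { argon2: '^0.27.0', 'date-fns': '^2.16.0' },
};
const layerConfig = {
  deps: ['date-fns'],
  binaryDeps: ['argon2'],
  binaryPostfix:
    '--target=12.2.0 --target_arch=x64 --target_platform=linux --target_libc=glibc',
};
const layerPkg = makeLayerPackageJson(workspacePkg, layerConfig);
console.log(JSON.stringify(layerPkg, null, 2));
```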
@vespertilian I wish I had more to contribute right now other than: that sounds like a great direction. If nx-aws could account for both scenarios in an easy-to-integrate way, that would be awesome.
OK, got it working by simply bypassing sam cli a little bit more and pointing it directly to a pre-built zip file. Note that this build will almost certainly need updates to work on Windows: it's a proof of concept.
I've updated https://github.com/studds/nx-aws-example/tree/bcrypt-layer-example to handle both binary dependencies, and standard nx libs as layers.
> It would be nice if we could auto-reference the ARN so that it gets updated "if and when" we make changes and need to deploy a new layer using the `nx affected` command.
We can do this using `--withDeps`; see the updated readme in the example.
> The only way I can see to automatically sync the lambda layers and NX package.json dependencies is to have a special NX-AWS config
The @nrwl/node:package builder handles this - see updated example.
> post-install script hack.
I believe you can just `npm i` with the flags specifying the architecture, per the example repo. Seems to be working with bcrypt. No need for the post-install script hack.
@studds Cool, I will try to play around with it a bit this weekend.
@studds Looking at it now, sorry didn't get to it over the weekend I was visiting my folks and could not find the time. Thanks for your work on this.
A couple of things: can you commit your nx-aws build into the sample repo? The package.json references these files on your computer. I was trying to deploy it this morning; I missed this last time I was playing around because I had an older cloned version of this repo.
```json
"@nx-aws/core": "/Users/danielstudds/Code/nx-aws/nx-aws-core-0.9.1.tgz",
"@nx-aws/s3": "/Users/danielstudds/Code/nx-aws/nx-aws-s3-0.9.1.tgz",
"@nx-aws/sam": "/Users/danielstudds/Code/nx-aws/nx-aws-sam-0.9.1.tgz",
```
Maybe just put the files into a custom packages folder?

```json
"@nx-aws/core": "./custom-packages/nx-aws-core-0.9.1.tgz",
"@nx-aws/s3": "./custom-packages/nx-aws/nx-aws-s3-0.9.1.tgz",
"@nx-aws/sam": "./custom-packages/nx-aws/nx-aws-sam-0.9.1.tgz",
```
> I believe you can just `npm i` with the flags specifying the architecture, per the example repo. Seems to be working with bcrypt. No need for the post-install script hack.

What magic is this? I will try it out when I get the correct versions... but I don't understand it at all.
Thanks again!
> A couple of things, can you commit your nx-aws build into the sample repo?
I could, but I won't 😄 The latest versions from npm include these changes. I've installed them in https://github.com/studds/nx-aws-example/tree/bcrypt-layer-example and repushed. Apologies for the broken commit.
> I believe you can just `npm i` with the flags specifying the architecture, per the example repo. Seems to be working with bcrypt. No need for the post-install script hack.

> What magic is this? I will try it out when I get the correct versions... but I don't understand it at all.
It's the same magic as installing a single package with the arch flags. (Caveat: I've never made a binary package so I might have some details wrong, but this is my understanding.) If I run `npm i --target=8.1.0 --target_arch=x64 --target_platform=linux --target_libc=glibc`, then any time `node-pre-gyp` is invoked to get a binary, it will use the target arch, platform and libc passed to `npm i`. There's no need to install packages one by one with these commands. The only time this approach breaks down is if there's no suitable prebuilt binary for a package, in which case I think you'd probably need to run `npm i` in a suitable container - but I could be wrong there; I haven't come up against that scenario as yet and hope I never do!
@studds
Just pulled down the code, and when I try to build it I get the error "required main is missing". Do you not get this? Can you pull a fresh instance down into a new folder and check?
> It's the same magic as installing a single package with the arch flags. (Caveat: I've never made a binary package so I might have some details wrong, but this is my understanding.) If I run `npm i --target=8.1.0 --target_arch=x64 --target_platform=linux --target_libc=glibc` then any time `node-pre-gyp` is invoked to get a binary, then it will use the target arch, platform and libc passed to `npm i`. There's no need to install packages one by one with these commands. The only time this approach breaks down is if there's no suitable prebuilt binary for a package, in which case I think you'd probably need to run `npm i` in a suitable container - but I could be wrong there, haven't come up against that scenario as yet and hope I never do!
Running this command will install the correct binary for Lambda, but if you then try to use that binary locally I don't think it will work (as it's for Linux). Also, assuming you save it to the package.json, the package only knows you want bcrypt, so the next developer who pulls it down would deploy the package that their machine requires (I think). I will still try to play around with your solution; it would be great if I am wrong about this.
I will be around tomorrow and Friday if you want to try to go through any of this in real-time. Just let me know.
Oh, I am a big fan of being able to pass the s3 bucket via the command line. Very nice :smile: looking forward to using this.
> Just pulled down the code and when I try to build it I get the error "required main is missing" Do you not get this? Can you pull a fresh instance down into a new folder and check?
Fixed. I need to add some automated tests to nx-aws.
> Running this command will install the correct binary for Lambda, but if you then try to use that binary locally I don't think it will work (as it's for Linux) also assuming you save it the package.json, the package only knows you want Bcrypt so the next developer who pulls it down would deploy the package that their machine requires (I think). I will still try to play around with your solution, it would be great if I am wrong about this.
You are absolutely right; that's why this command is run as part of the build in a dist directory, and is only used for either running in `sam local` or deploying to aws. The main dependencies are all installed as per usual.
> Fixed. I need to add some automated tests to nx-aws.
PRs welcome 😄
@vespertilian I'm pretty happy with how this is working now - I've merged in my PR. Have you had a chance to look at it?
@studds Hey, no, not yet. Sorry, it's been hectic with some contract work. However, this is important and I really appreciate the work you have done on it. I will look at it now.
@studds
I see where the magic is now... right here:
npm install --target=12.2.0 --target_arch=x64 --target_platform=linux --target_libc=glibc
I did not know you could do that with npm install for the entire package.json; makes sense. This will totally work for a layer, very clever.
I think this will work for me, I will be testing it out shortly on my actual code repo, stay tuned.
@dbrody, are you still trying to use this? I think we have it all sorted out.
@vespertilian - Right now it's not using layers, but I'd like to. Is the update to better support layers?
@dbrody Yeah, @studds added better support. I am working on some documentation.
Sample here: https://github.com/vespertilian/nx-aws-example/tree/nx-aws-bcrypt-layer-example
I will try to add more soon but I am moving this weekend. Hopefully, this is good enough to get you started. Any feedback you have for the documentation is welcome.
About a month ago, a small project I'm working on decided to move from Heroku deployment to AWS and we've also been studying monorepo, which is basically how I landed here looking for best practices or even just how to build and use AWS Amplify and Lambda with Angular, Node, and MongoDB. It's been a journey sucking on a firehose of information gleaned from AWS docs, tutorials, medium articles and stackoverflow.
In another life, I built a complex application specific language along with its build process (imake and make). But, we didn't have the instantaneous partial build and run turn arounds available now.
To get my head around lambdas, layers and AWS, I built a repository containing working code that has an AWS Lambda (one handler plus a layer), with jest tests that run locally (unit tests and with `sam local start-api`) and on hosted lambdas in AWS. It can grab a mongo db connection string out of the Secret Store and connect to cloud atlas, too.
I wrote best practices above because I'm all over the place with my source code structure thoughts. What code to break out into an nx library? What is the appropriate abstraction at which to slice a Lambda Layer? What local development process to use? How continuous should the "C" in CI/CD be?
First off, here's the repository. It has instructions on how to build and test it. I don't include instructions for AWS or Cloud Atlas setup. It's a simple piece of software: REST APIs through API Gateway into a Lambda function that handles all of them. It connects with Cloud Atlas after fetching the MDB connect URI from the secret store and does what REST APIs do. There's no front-end software; it's all backend.
Although nothing earth shattering, as a proof of concept (me proving to myself that I can do something useful in AWS like understand how API Gateway and Lambda and Layers work), it's just fine. I did not use nx to set this up. That's a different proof of concept that I had to take a break from when trying to learn AWS.
On to why this thread caught my eye.
The monorepo concept feels really great and, if we are going to use AWS, I'd love to fit AWS backend development into that model. My initial experience with Amplify left me feeling like Amazon doesn't quite get what monorepo means. However, that could have been pilot error. Meanwhile, fighting through trying to understand their broad offering for the backend was a massive undertaking. Now that I have a working version with a Lambda and a Layer, I feel much more confident in being able to cobble together an environment with a library and/or backend application that works with AWS.
That said, it will be cobbled.
The Layer I built in the repository is just the mongo db and AWS node drivers. About 4.3MB when built and loaded into AWS, I think. I suspect tree shaking could knock that down significantly. Smaller Layer would load faster, reducing cold start time, although there are other ways of solving that problem (like keeping Lambdas warm). Really, however, this goes to the heart of the library-layer relationship: what makes a good abstraction for code development on the backend and bundling in the hosted environment?
Next question: should I only test in `sam local start-api`, or include an extra step and test with Node/Express or NestJS, then sam local, then sam hosted? The latter gets me automatic, incremental rebuilds with typescript. I can code entirely in typescript on the front and back end. Once ready, I can use the /dist/*/.js compiled objects with sam local somehow (I'm not entirely sure how) and then deploy for hosted testing.
The advantage is that it's all typescript with a fast feedback cycle. The problem is that I'll miss idiosyncrasies of the AWS environment and could miss some big ones until I get there. The disadvantage of the AWS (sam) local environment is that I cannot run typescript unless I have something watching the code and spilling to /dist. Also, I'm guessing the code needs source maps for the editor? BTW, the Evil Martians folks did a write-up, Serverless TypeScript: A complete setup for AWS SAM Lambdas. It's been a while since I played with Makefiles, but it looks like they've got some good ideas, and although it's not a monorepo, I'm wondering if there's a way it could be incorporated into an nx plug-in.
I wonder if each AWS sub-thingy (app, library/layer) should have its own template(.yaml/json) or if there should be an overarching template that runs them all. And, I don't know how template inclusion works or if that should factor into this, as well. I am novice experience level with AWS and templates for Cloud Formation, so perhaps some of these questions are out of ignorance.
It seems much of the infrastructure and tooling is available to build a reliable, productive AWS plug-in for nx. I'm not sure to what extent there should be a specific method for laying out AWS integrated components into the larger mono repository, or if there can be multiple ways to slice and dice inclusion.
As I've learned what I need to know about AWS, I'm confident I can move back to my other project and duct tape something together that meets my needs. I'd love to help with what (I think) you're trying to do here to whatever extent I can. This area is ripe for improvement and a really good AWS plug-in (or AWS lambda plug-in) would go a long way in making it easy for people like me to pick up the code and start running.
What a great read this thread is... any advances on this?
I'm struggling to implement lambda layers in my monorepo, which is based on the nx-serverless template.
I have a `services` folder and a `libs` folder; I also have one `serverless.base.ts` at the root, and every service extends it.
I'm also writing serverless templates with Typescript and building the packages with `serverless-esbuild`.
I would like to have the binary target `"rhel-openssl-1.0.x"` for Prisma in Lambdas on a Lambda Layer, and then also add the Prisma Client.
I'm deploying with GitHub Actions workflows, using the nx-serverless repo commands there: sls package and then sls deploy.
Any insight or idea would be very much appreciated, given how interesting and educational this thread is.
Edit:
As for now I'm just adding the 2 files needed to run the Prisma Client to the lambdas, and it's working, but if everything grows this is not optimal at all... right now every Lambda weighs somewhere between 20-35MB; without the "rhel-openssl-1.0.x" file it's about 15MB lighter.
I'm researching github actions that could help me achieve this, like: https://github.com/KillDozerX2/aws-lambda-publishlayer but as I'm not a total expert with Typescript, Esbuild, or sls package, there is something I'm missing and I'm screwing everything up :P
Best regards!
Hi @Markkos89 - apologies for the slow reply.
I've added experimental support for building lambda layers: https://github.com/studds/nx-aws#building-lambda-layers---experimental
There's a little light documentation, but no example as yet.
The fundamentals are:
- Generate an app using `ng generate @nx-aws/sam:application {name}`
- Change the builder from `@nx-aws/sam:build` to `@nx-aws/sam:layer`
- Add a package.json with your dependencies in the root of the app
- Update `template.yaml` to specify the lambda layer.
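For the last step, a minimal template.yaml fragment might look like this (the logical ID, name, and paths are assumptions, not from the nx-aws docs):

```yaml
Resources:
  DepsLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: deps-layer
      Description: Shared node_modules for the lambdas
      ContentUri: ../../dist/apps/deps-layer
      CompatibleRuntimes:
        - nodejs12.x
      RetentionPolicy: Retain
```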
Thank you for the reply! The example is great!!
I went with the Serverless Framework for now, but I will give it a try!!!
So I am struggling to get an example of lambda layers working.
My attempt is on the lambda layer branch of this repo https://github.com/vespertilian/nx-aws-example/tree/lambda-layers which is a clone of your example repo.
My real goal is to get bcrypt and similar libraries that require C++ binaries to work with nx-aws. So even if I get this working I think there is still some work I would need to do.
I have done a similar thing once before with Apex lambda (now deprecated).
It involves installing the node module for a specific target for deployment. Something like this:
npm install bcrypt --save --target=8.1.0 --target_arch=x64 --target_platform=linux --target_libc=glibc
Then excluding that from the Webpack build.
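That webpack exclusion can be sketched with an externals entry (a minimal config sketch; it assumes bcrypt is supplied by the layer at runtime rather than bundled):

```javascript
// Minimal webpack config sketch: mark bcrypt as external so webpack leaves
// `require('bcrypt')` in the output bundle instead of trying to bundle the
// native binary, which the Lambda layer provides at runtime.
const config = {
  target: 'node',
  externals: {
    bcrypt: 'commonjs bcrypt',
  },
};

module.exports = config;
```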
It would be great to have a layer that could handle this for me.
Any chance we could chat again? I could buy you a coffee or send some $$ your way. I'm a bit stuck and I feel like you could unstick me quick smart. Again, just email vespertilian@gmail.com