I'm not sure I understand the question. ECS/Docker are separate from Lambda. That is, you wouldn't package your Lambda function into ECS/Docker.
Additionally, if you want to go the microservice route, I would not recommend using this library. It would be better to break your service up into multiple functions, for example: 1 function per resource, routing based on the HTTP method; or, taking it further, 1 function per HTTP method per resource. If you want to get up and running quickly, simply use this library and refactor to individual functions later (it's best to design for this from the beginning).
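For illustration, a "bare" per-resource function behind API Gateway's proxy integration might look roughly like this (a sketch; the resource name and responses are placeholders):

```js
// users.js - one Lambda per resource, routing on the HTTP method
// (sketch; assumes API Gateway proxy integration)
exports.handler = async (event) => {
  switch (event.httpMethod) {
    case 'GET':
      return { statusCode: 200, body: JSON.stringify([]) } // list users
    case 'POST':
      return { statusCode: 201, body: event.body || '{}' }  // echo back the created user
    default:
      return { statusCode: 405, body: 'Method Not Allowed' }
  }
}
```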
On Mon, May 15, 2017, 8:36 AM Brennan Erbeznik wrote:
> Would you ever recommend packaging an app like this (Express -> Lambda) into a Docker container on ECS? Say I'm building the backend for a mobile app, and want to use the Microservice pattern.
Could you elaborate on some of the pros/cons of using this library vs "bare" Lambdas? Is it a matter of performance, security, etc.?
Personally, I found the transition in the opposite direction (bare -> Express) to be extremely useful. Express eliminates most of the boilerplate code (full request validation, compression, CORS, etc.) and keeps things generally tidy (simpler response formatting, etc.). That doesn't have to come at the cost of reduced security either, as long as we still create a bunch of Lambdas, each with its own specific role/policy, and just map them to a single entrypoint (unlike the given example, where all code goes through a single Lambda resource). Are there any considerations where the "bare" approach would still be "better"?
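For example, the boilerplate I'm referring to collapses into a few lines of middleware (a sketch; cors/compression are the usual community packages, and the Swagger validator is left as a placeholder):

```js
const express = require('express')
const cors = require('cors')               // CORS headers
const compression = require('compression') // gzip responses

const app = express()
app.use(cors())
app.use(compression())
app.use(express.json()) // request body parsing

// app.use(swaggerValidator(spec)) // placeholder: request validation from a Swagger/OpenAPI spec

app.post('/orders', (req, res) => {
  // by the time we get here, the body is parsed and CORS/compression are already handled
  res.status(201).json({ received: req.body })
})

module.exports = app
```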
Another advantage is the OP's use case, where we could easily port the Express application to run outside of Lambda just by changing the listener (some people may want that extra flexibility).
Thanks
The complete pros/cons would justify a blog post, but most of it comes down to the benefits of microservices: you can choose different technologies for different services; fine-tune each function's memory/CPU requirements to optimize for speed + cost; and avoid the Express overhead.
Microservice architecture isn't always the right answer though. If you are a small team, the additional overhead will likely outweigh the advantages. Additionally, Express (and other frameworks) offer an entire ecosystem of tools and tutorials.
Makes sense. At this point I'm mostly interested in how much overhead Express adds in terms of latency - I might need to run a few X-Ray traces to measure that, but if you have any typical numbers, that would be really helpful! Everything else seems to be pretty well covered for our use case (I do still think of it as microservices, aside from the fact that all of the code is packed into a single ZIP and uses the same runtime). Thanks again.
Ok, I've measured some latencies and they appear to be quite large. It can take on the order of 3 s (for a cold start) and ~500 ms (when "warm") between starting the Express "server" (i.e. immediately after the Lambda entrypoint) and calling my actual (req, res) handler. And all I do in between is set up some middleware for Swagger validation, compression and CORS. So I may just go back to calling "vanilla" Lambdas :/
> between starting the Express "server" (i.e. immediately after the Lambda entrypoint)
You shouldn't be starting the Express server inside your Lambda handler - you should do this outside of your handler. This way, your server is only started once (on cold start), and the same server is reused over the lifetime of the Lambda container.
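In code, that looks roughly like this (a sketch using the library's createServer/proxy calls; './app' is a placeholder for the Express app module):

```js
const awsServerlessExpress = require('aws-serverless-express')
const app = require('./app') // placeholder: the Express app built elsewhere

// Runs once per container, on cold start only
const server = awsServerlessExpress.createServer(app)

exports.handler = (event, context) => {
  // Runs on every invocation and reuses the already-started server
  awsServerlessExpress.proxy(server, event, context)
}
```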
Ok, thanks for the tip! I was doing that for better testability (so nothing gets set up in the global context), but it can still be optimized even when called from the Lambda. For example, one can set a "flag" variable that indicates whether the server has already been started, and then shortcut to the Express handler if it has. If I understand correctly, the flag value should be preserved within the container. EDIT: there could potentially be a concurrency issue with this if two "cold" requests arrive at virtually the same time - though probably not, as the Node.js runtime is single-threaded; instead, a second container would be spun up?
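Something like the following is what I mean (a sketch; createApp is a placeholder for my own app factory, kept separate for testability):

```js
const awsServerlessExpress = require('aws-serverless-express')
const { createApp } = require('./app') // placeholder factory; nothing is set up at module load

let server // preserved across invocations within the same container

exports.handler = (event, context) => {
  if (!server) {
    // first ("cold") invocation in this container: build the app and start the server
    server = awsServerlessExpress.createServer(createApp())
  }
  awsServerlessExpress.proxy(server, event, context)
}
```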
Even then, cold starts still take longer. Our application will have a relatively small audience in the beginning (with a low request rate), so we may actually benefit from optimizing cold starts, and the additional overhead may not be worth it (as you suggested before).