Closed fivepapertigers closed 5 years ago
Hi @fivepapertigers ,
Is your issue related to #117 ? How many lambdas are you warming up?
I wouldn't be opposed to having an option to choose the invocation type as long as it doesn't affect how the lambdas are being warmed up.
Currently, all the lambdas are warmed in parallel (not sequentially), so the number of lambdas shouldn't really affect the time taken by the warmer function that much.
Hm, that may very well be my issue. My lambdas are running in VPC currently, so I may have just misdiagnosed the issue. I'll keep a close eye out over the next few days to see if I get any more timeouts now that I increased the plugin's timeout.
I see your point about the concurrency, too - theoretically the warm-up plugin should only be as slow as the slowest downstream invocation.
If it happens again, please raise it with AWS support so they can assist.
I've been trying to replicate the issue for a year unsuccessfully (with lambdas inside and outside of a VPC).
Gah, that's brutal, sorry to hear. I will keep an eye out. Again, great plugin - it's nice to see a serverless plugin that is well maintained. 👏
Sorry for the radio silence.
I'm still not 100% convinced that changing how the lambdas are invoked would improve this plugin.
Please, correct me and convince me of why async invocations would be good. The implementation is trivial to do. I just want to understand the actual benefits.
No, I think you make a good point. I'll close this and reopen if I have reason to, thanks.
Hi, thanks for the plugin.
I think this feature request should be reopened.
I can't see why the warmer would be invoking the functions with `RequestResponse`. It's not a health check; the warmer should just fire and forget.
> Currently, all the lambdas are warmed in parallel (not sequentially) so the number of lambdas shouldn't really affect that much the time taken by the warmer function.
Regarding the statement above, the lambdas are not warmed in parallel but concurrently. Set the warmer's `memorySize` to `128` and `concurrency` to `50` or more for a lambda function that has a cold start duration of 4+ seconds, and the warmer will struggle to invoke 50 instances concurrently. That causes the same lambda containers to be reused, and you'll never get 50 concurrent executions.
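To illustrate the point above, here is a minimal sketch (not the plugin's actual code; the delays and counts are made-up stand-ins) comparing how long the warmer itself is busy when each invocation waits for the target handler to finish (`RequestResponse`-style) versus when the call returns as soon as the request is queued (`Event`-style):

```javascript
// Illustrative simulation only: COLD_START_MS and TARGETS are invented numbers,
// not measurements from the plugin.
const COLD_START_MS = 200; // stand-in for a 4+ second cold start
const TARGETS = 10;        // stand-in for 50 warmed lambdas

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// RequestResponse-style: each promise resolves only after the target
// handler returns, so the warmer is busy for the full cold-start duration.
async function warmSync() {
  const start = Date.now();
  await Promise.all(Array.from({ length: TARGETS }, () => sleep(COLD_START_MS)));
  return Date.now() - start;
}

// Event-style: the invoke call resolves as soon as the request is queued,
// so the warmer finishes almost immediately.
async function warmAsync() {
  const start = Date.now();
  await Promise.all(Array.from({ length: TARGETS }, () => Promise.resolve()));
  return Date.now() - start;
}

(async () => {
  console.log(`RequestResponse-style: ~${await warmSync()}ms`);
  console.log(`Event-style: ~${await warmAsync()}ms`);
})();
```

Even with fully parallel promises, the synchronous variant holds the warmer function open for at least the slowest target's cold start, which is where the timeout pressure comes from.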
I think a configuration option to set `InvocationType` would be very useful; at least we'd get to choose how we want the warmer to work.
Thanks in advance.
First off, thanks for the great plugin.
We started noticing that the warmup Lambda itself was timing out. We've increased the timeout to take care of that, but ideally we'd love to keep the plugin Lambda's runtime as short as possible and not have to worry about scaling it to match an increasing number of functions in the service.
I wanted to suggest warming up the downstream Lambdas using an async invoke (`InvocationType: "Event"`) instead of `RequestResponse`, ideally through a configuration option. In our case, we are happy to sacrifice the plugin's reporting on runtime-level success/failure of any downstream functions, since we monitor errors on those functions anyway.

Happy to open a pull request that introduces that option if it sounds reasonable.
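For what it's worth, a minimal sketch of what I have in mind (the payload contents and function names are illustrative, not the plugin's actual values):

```javascript
// Hypothetical sketch of the proposed option: build invoke parameters with a
// configurable InvocationType, defaulting to fire-and-forget 'Event'.
const buildWarmupParams = (functionName, invocationType = 'Event') => ({
  FunctionName: functionName,
  // 'Event' queues the call and returns immediately;
  // 'RequestResponse' waits for the handler to finish.
  InvocationType: invocationType,
  Payload: JSON.stringify({ source: 'serverless-plugin-warmup' }),
});

// Example wiring with the AWS SDK (requires credentials; illustration only):
// const AWS = require('aws-sdk');
// const lambda = new AWS.Lambda();
// await Promise.all(
//   functionNames.map((name) => lambda.invoke(buildWarmupParams(name)).promise())
// );
```

With `InvocationType: 'Event'`, Lambda responds with a 202 once the request is queued, so the warmer's runtime stays flat as the number of warmed functions grows.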