Open sibblegp opened 6 years ago
Any update on this issue?
Could this be the result of Lambda cold starts? As I understand it, Zappa's keep_warm setting only keeps one Lambda container warm; see https://github.com/Miserlou/Zappa/issues/1790.
Until Zappa has a fix for this, I'm trying out thundra-lambda-warmup to keep multiple containers warm, but I haven't had a chance to load test it yet.
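For reference, the keep_warm behavior mentioned above is driven by the Zappa settings file. A sketch of the relevant configuration is below (the `keep_warm` and `keep_warm_expression` keys are my understanding of the settings names; defaults and availability may differ across Zappa versions, and `app.application` is a placeholder for your own WSGI entry point):

```json
{
    "production": {
        "app_function": "app.application",
        "aws_region": "us-east-1",
        "keep_warm": true,
        "keep_warm_expression": "rate(4 minutes)"
    }
}
```

As discussed in issue #1790, this scheduled ping only keeps a single container warm, which is why it doesn't help under higher concurrency.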
Context
I have noticed dramatically increasing response times for even the simplest application when deployed with Zappa on Lambda. With 10 simultaneous requests, responses arrive in ~140 ms; with 100 simultaneous requests, they take ~1000 ms. The Lambda functions themselves appear to execute in under 1 ms. Since I use Zappa in production, where 100 requests per second is not unusual, this is concerning. My current AWS account limit is 5000 concurrent Lambda executions. I am not sure whether this is a Zappa, Lambda, or API Gateway issue, but this feels like the right forum.
Example results:
```
ab -c 10 -n 1000 --ENDPOINT--
ab -c 100 -n 1000 --ENDPOINT--
```
Expected Behavior
Increasing the number of requests up to the concurrent limit should not increase response time.
Actual Behavior
Response time increases almost linearly with the number of requests.
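As a quick sanity check on "almost linearly", the two data points from the Context section give a roughly constant latency per unit of concurrency:

```python
# Latencies reported above: ~140 ms at 10 concurrent requests,
# ~1000 ms at 100 concurrent requests.
concurrency = [10, 100]
latency_ms = [140, 1000]

# If latency scales linearly with concurrency, latency/concurrency
# should be roughly constant across measurements.
per_unit = [lat / conc for lat, conc in zip(latency_ms, concurrency)]
print(per_unit)  # [14.0, 10.0] -- same order of magnitude, i.e. near-linear scaling
```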
Possible Fix
Steps to Reproduce
Run this code in a Zappa deployment:
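The code block originally posted here was not preserved in this copy of the issue. As a hypothetical stand-in, a minimal WSGI app of the kind Zappa deploys (small enough that application overhead cannot explain the latency) might look like this; the module and callable names are placeholders:

```python
# app.py -- hypothetical minimal reproduction case; the original snippet
# from this issue was not preserved. Zappa can deploy any WSGI callable.
def application(environ, start_response):
    # Respond immediately with a tiny static body, so that any measured
    # latency comes from Lambda / API Gateway rather than the app itself.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]
```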
Then run:

```
ab -c 100 -n 1000 --ENDPOINT--
```

and compare the results to:

```
ab -c 10 -n 1000 --ENDPOINT--
```
Your Environment
zappa_settings.py:
pip freeze: