Miserlou / Zappa

Serverless Python
https://blog.zappa.io/
MIT License
11.89k stars 1.2k forks

Lambda/Zappa Request Concurrency and Speed #1559

Open sibblegp opened 6 years ago

sibblegp commented 6 years ago

Context

I have noticed a dramatic increase in response time for even the simplest application when deployed on Zappa/Lambda. When running 10 requests simultaneously, they respond in ~140ms; when running 100 requests simultaneously, they respond in ~1000ms, even though the Lambdas themselves seem to execute in under 1ms. Considering that I use Zappa in production, where 100 requests per second is not unusual, this is concerning. My current AWS account limit is 5000 concurrent Lambda executions. I am not sure whether this is a Zappa, Lambda, or API Gateway issue, but this seems like the right forum.

Example results:

ab -c 10 -n 1000 --ENDPOINT--

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       50   82  32.8     79     325
Processing:    40   61  15.3     58     118
Waiting:       40   59  14.5     56     118
Total:         90  143  37.3    136     392

ab -c 100 -n 1000 --ENDPOINT--

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       64  540 231.4    573    1767
Processing:    43  484 568.2    350    2889
Waiting:       42  358 593.3    172    2885
Total:        168 1024 690.5    931    3926

Expected Behavior

Increasing the number of requests up to the concurrent limit should not increase response time.

Actual Behavior

Response time increases almost linearly with the number of requests.
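A quick Little's Law sanity check (a rough model added here for illustration, not part of the original report) on the mean "Total" times above suggests the platform is throughput-limited rather than scaling with concurrency:

```python
# Little's law: concurrency ~= throughput * mean latency, so
# throughput ~= concurrency / mean latency. Mean "Total" latencies
# are taken from the two ab runs above.
runs = {10: 0.143, 100: 1.024}  # concurrency -> mean total latency (s)

for c, latency in sorted(runs.items()):
    print(f"-c {c:>3}: ~{c / latency:.0f} req/s")

# -c 10 sustains ~70 req/s while -c 100 only reaches ~98 req/s:
# latency grows ~7x but throughput is nearly flat, which is consistent
# with a bottleneck around ~100 req/s rather than linear scaling.
```

If capacity scaled cleanly, throughput at -c 100 would be roughly 10x the -c 10 figure; instead it barely moves, which is what the "almost linear" latency growth implies.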

Possible Fix

Steps to Reproduce

Run this code in a Zappa deployment:

from flask import Flask, jsonify

APP = Flask(__name__)

@APP.route('/')
def hello_world():
    # Trivial JSON response; work inside the handler is negligible,
    # so any latency measured comes from the platform, not the app.
    return jsonify({'hello': 'world'})

if __name__ == "__main__":
    # Local run only; under Zappa the app is served via API Gateway/Lambda.
    APP.run('127.0.0.1', port=6061, debug=True)

Then run:

ab -c 10 -n 1000 <ENDPOINT>

and compare results to

ab -c 100 -n 1000 <ENDPOINT>
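For environments without ApacheBench, roughly the same comparison can be sketched with the standard library alone. `bench` below is a hypothetical helper (not from the issue), and `<ENDPOINT>` must be filled in with the deployed URL:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def bench(url, concurrency, total):
    """Fire `total` GET requests with `concurrency` workers; return mean latency in seconds."""
    def one(_):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return mean(pool.map(one, range(total)))

if __name__ == "__main__":
    # Mirrors the two ab runs above; replace <ENDPOINT> before running.
    for c in (10, 100):
        print(f"-c {c:>3}: mean {bench('https://<ENDPOINT>/', c, 1000) * 1000:.0f} ms")
```

Note that `ThreadPoolExecutor` ramps its pool differently than ab keeps connections saturated, so the absolute numbers will differ slightly, but the 10-vs-100 trend should reproduce.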

Your Environment

Pip Freeze:

argcomplete==1.9.3
base58==1.0.0
boto3==1.7.55
botocore==1.10.55
certifi==2018.4.16
cfn-flip==1.0.3
chardet==3.0.4
click==6.7
docutils==0.14
durationpy==0.5
Flask==1.0.2
future==0.16.0
futures==3.2.0
hjson==3.0.1
idna==2.7
itsdangerous==0.24
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
MarkupSafe==1.0
placebo==0.8.1
python-dateutil==2.6.1
python-slugify==1.2.4
PyYAML==3.12
requests==2.19.1
s3transfer==0.1.13
six==1.11.0
toml==0.9.4
tqdm==4.19.1
troposphere==2.3.1
Unidecode==1.0.22
urllib3==1.23
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
zappa==0.46.1

dkhan11 commented 5 years ago

Any update on this issue?

lhadjchikh commented 5 years ago

Could this be the result of Lambda cold starts, as described here? As I understand it, Zappa's keep_warm setting only keeps one Lambda container warm. See https://github.com/Miserlou/Zappa/issues/1790.

Until Zappa has a fix for this, I'm trying out thundra-lambda-warmup to keep multiple containers warm, but I haven't had a chance to load test it yet.
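For reference, the multi-container warming idea can be sketched with boto3 alone: hold N synchronous invocations open in parallel so Lambda is forced to initialize N distinct containers instead of reusing one. The function name, container count, and warming-event shape below are placeholders/assumptions, not Zappa's actual keep_warm payload:

```python
import json
from concurrent.futures import ThreadPoolExecutor

FUNCTION_NAME = "my-zappa-app-production"  # placeholder: your deployed Lambda's name
NUM_CONTAINERS = 20  # assumption: how many containers to try to keep warm

def warming_payload():
    # Marker event the handler can recognize and short-circuit on.
    # The key name is an assumption; adjust to whatever your handler expects.
    return json.dumps({"zappa_keep_warm": True}).encode()

def warm(n=NUM_CONTAINERS):
    import boto3  # AWS SDK; imported here so the payload helper works without it

    client = boto3.client("lambda")
    # Synchronous (RequestResponse) calls held open concurrently prevent
    # Lambda from serving them all from a single reused container.
    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [
            pool.submit(
                client.invoke,
                FunctionName=FUNCTION_NAME,
                InvocationType="RequestResponse",
                Payload=warming_payload(),
            )
            for _ in range(n)
        ]
        return [f.result()["StatusCode"] for f in futures]

if __name__ == "__main__":
    warm()
```

To be useful this would need to run on a schedule (e.g. a CloudWatch Events rule every few minutes), since idle containers are reclaimed; it also burns n invocations per run, so n should stay well below the account concurrency limit.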