rubyonjets / jets

Ruby on Jets
http://rubyonjets.com

Jets 3.0.17 significant increase in Lambda duration #597

Closed: tlhampton13 closed this issue 11 months ago

tlhampton13 commented 3 years ago

Checklist

My Environment

Software            Version
Operating System    deployed to AWS
Jets                3.0.17
Ruby                2.7.2

Expected Behaviour

After upgrading Jets from 3.0.13 to 3.0.17, Lambda durations should remain similar to what they were before the deployment.

Current Behavior

After upgrading Jets from 3.0.13 to 3.0.17, we noticed Lambda durations increase significantly for all endpoints in our REST API. Lambda cost for our system increased 8-fold after the upgrade.

Step-by-step reproduction instructions

Code Sample

The duration of all Lambdas in our system increased significantly, even though no code changes were made other than the Jets upgrade.

Solution Suggestion

Fix performance issue.

tongueroo commented 3 years ago

Deployed two demo Jets apps, one on 3.0.13 and one on 3.0.17.

Not seeing a significant difference:

~/environment/demo13 $ jets deploy
~/environment/demo13 $ jets url
API Gateway Endpoint: https://2zb5v74463.execute-api.us-west-2.amazonaws.com/dev
~/environment/demo13 $ time curl https://2zb5v74463.execute-api.us-west-2.amazonaws.com/dev
{"jets_version":"3.0.13","ruby_version":"2.7.4"}
real    0m0.097s
user    0m0.006s
sys     0m0.007s
~/environment/demo13 $ time curl https://2zb5v74463.execute-api.us-west-2.amazonaws.com/dev
{"jets_version":"3.0.13","ruby_version":"2.7.4"}
real    0m0.081s
user    0m0.009s
sys     0m0.005s
~/environment/demo13 $ cd ../demo17
~/environment/demo17 $ jets deploy
~/environment/demo17 $ jets url
API Gateway Endpoint: https://jnhxcpxcrk.execute-api.us-west-2.amazonaws.com/dev
~/environment/demo17 $ time curl https://jnhxcpxcrk.execute-api.us-west-2.amazonaws.com/dev
{"jets_version":"3.0.17","ruby_version":"2.7.4"}
real    0m0.077s
user    0m0.007s
sys     0m0.006s
~/environment/demo17 $ time curl https://jnhxcpxcrk.execute-api.us-west-2.amazonaws.com/dev
{"jets_version":"3.0.17","ruby_version":"2.7.4"}
real    0m0.091s
user    0m0.004s
sys     0m0.009s
~/environment/demo17 $ 

Wondering what's different here. Could you try deploying both of the demo apps above to your AWS account, so we can rule out whether this is a specific environment or app issue?

tongueroo commented 3 years ago

Closing in favor of the forum thread: https://community.boltops.com/t/jets-3-0-17-extreme-performance-issue/767/2

tlhampton13 commented 3 years ago

I disabled prewarming and the increase in average duration appears to be resolved. Perhaps there is a problem with the prewarm feature.
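
For reference, prewarming can be turned off in the app's Jets configuration. A minimal sketch, assuming the config.prewarm options as documented for Jets prewarming; the specific values shown are illustrative, not a recommendation:

# config/application.rb
Jets.application.configure do
  # Disable the scheduled prewarming job entirely.
  config.prewarm.enable = false

  # Alternatively, keep prewarming but make it less aggressive.
  # config.prewarm.rate = "60 minutes"  # how often the prewarm job runs
  # config.prewarm.concurrency = 1      # prewarm requests sent per function
end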

tongueroo commented 3 years ago

Re-opening. I think it has to do with rate limiting when prewarming large apps, 175 functions or more.

I believe prewarming loops through the Lambda functions, and those AWS API calls are triggering rate-limit issues. If so, the listing of Lambda functions will have to be cached. Will have to dig into this theory.
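
A minimal sketch of what caching the function listing could look like, assuming the standard Aws::Lambda::Client#list_functions call from the AWS SDK for Ruby v3; the class name and TTL are hypothetical, purely for illustration:

require "aws-sdk-lambda"

# Hypothetical cache so repeated prewarm runs do not call ListFunctions
# (and hit its rate limits) every time. Class name and TTL are illustrative.
class CachedFunctionList
  TTL = 300 # seconds

  def initialize(client: Aws::Lambda::Client.new)
    @client = client
    @names = nil
    @fetched_at = nil
  end

  def function_names
    return @names if @names && (Time.now - @fetched_at) < TTL

    # list_functions is paginated; each page exposes a functions array
    @names = @client.list_functions.flat_map { |page| page.functions.map(&:function_name) }
    @fetched_at = Time.now
    @names
  end
end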

It also looks like the retry logic can be hooked into with a Ruby lambda block: https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/timeout-duration.html

Maybe that hook can be used to confirm whether this is what's happening. Unsure when I'll get to this; will consider PRs. Of course, no sweat either way.
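
A minimal sketch of that idea, assuming the retry_limit and retry_backoff client options from the AWS SDK for Ruby v3 (retry_backoff applies under the SDK's legacy retry mode and replaces the default backoff, so the sleep is supplied explicitly); the logging and backoff values are illustrative only:

require "aws-sdk-lambda"

# Log every retry so throttling during prewarm shows up in the logs.
client = Aws::Lambda::Client.new(
  retry_limit: 5,
  retry_backoff: ->(context) {
    attempt = context.retries
    puts "Lambda API retry ##{attempt} for #{context.operation_name}"
    Kernel.sleep(2**attempt * 0.3) # simple exponential backoff
  }
)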

tongueroo commented 11 months ago

I'm thinking Jets 5 handles this, since there's only one Lambda function for all controllers.

https://blog.boltops.com/2023/12/05/jets-5-improvements-galore/